Nov 25 19:33:36 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 19:33:37 crc restorecon[4740]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 19:33:37 crc restorecon[4740]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 
19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37
crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 
19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 
crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc 
restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 19:33:37 crc restorecon[4740]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 19:33:38 crc kubenswrapper[4775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 19:33:38 crc kubenswrapper[4775]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 19:33:38 crc kubenswrapper[4775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 19:33:38 crc kubenswrapper[4775]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 25 19:33:38 crc kubenswrapper[4775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 25 19:33:38 crc kubenswrapper[4775]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.549836 4775 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556477 4775 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556510 4775 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556520 4775 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556529 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556539 4775 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556550 4775 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556559 4775 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556567 4775 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556575 4775 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 
19:33:38.556583 4775 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556591 4775 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556599 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556610 4775 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556621 4775 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556629 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556638 4775 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556670 4775 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556679 4775 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556730 4775 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556741 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556750 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556758 4775 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556769 4775 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556779 4775 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556787 4775 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556797 4775 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556805 4775 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556813 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556821 4775 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556829 4775 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556836 4775 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556844 4775 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556852 4775 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556860 4775 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556868 4775 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556876 4775 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556883 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556892 4775 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556899 4775 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556908 4775 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556917 4775 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556925 4775 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556933 4775 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556940 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556948 4775 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556956 4775 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556964 4775 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556972 4775 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556979 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556987 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.556995 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557006 4775 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557015 4775 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557022 4775 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557030 4775 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557037 4775 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557044 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557053 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557060 4775 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557070 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557080 4775 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557090 4775 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557098 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557108 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557116 4775 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557124 4775 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557136 4775 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557145 4775 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557155 4775 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557164 4775 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.557172 4775 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557321 4775 flags.go:64] FLAG: --address="0.0.0.0"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557338 4775 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557357 4775 flags.go:64] FLAG: --anonymous-auth="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557370 4775 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557384 4775 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557393 4775 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557405 4775 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557418 4775 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557428 4775 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557437 4775 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557447 4775 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557456 4775 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557467 4775 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557476 4775 flags.go:64] FLAG: --cgroup-root=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557485 4775 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557494 4775 flags.go:64] FLAG: --client-ca-file=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557503 4775 flags.go:64] FLAG: --cloud-config=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557511 4775 flags.go:64] FLAG: --cloud-provider=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557521 4775 flags.go:64] FLAG: --cluster-dns="[]"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557531 4775 flags.go:64] FLAG: --cluster-domain=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557540 4775 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557550 4775 flags.go:64] FLAG: --config-dir=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557559 4775 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557569 4775 flags.go:64] FLAG: --container-log-max-files="5"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557581 4775 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557590 4775 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557599 4775 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557608 4775 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557618 4775 flags.go:64] FLAG: --contention-profiling="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557627 4775 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557636 4775 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557670 4775 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557679 4775 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557722 4775 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557731 4775 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557740 4775 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557749 4775 flags.go:64] FLAG: --enable-load-reader="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557761 4775 flags.go:64] FLAG: --enable-server="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.557771 4775 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559242 4775 flags.go:64] FLAG: --event-burst="100"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559255 4775 flags.go:64] FLAG: --event-qps="50"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559265 4775 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559276 4775 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559286 4775 flags.go:64] FLAG: --eviction-hard=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559300 4775 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559310 4775 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559320 4775 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559330 4775 flags.go:64] FLAG: --eviction-soft=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559339 4775 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559348 4775 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559358 4775 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559367 4775 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559377 4775 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559386 4775 flags.go:64] FLAG: --fail-swap-on="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559396 4775 flags.go:64] FLAG: --feature-gates=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559410 4775 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559419 4775 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559429 4775 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559439 4775 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559449 4775 flags.go:64] FLAG: --healthz-port="10248"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559459 4775 flags.go:64] FLAG: --help="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559468 4775 flags.go:64] FLAG: --hostname-override=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559478 4775 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559487 4775 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559497 4775 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559506 4775 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559516 4775 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559525 4775 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559534 4775 flags.go:64] FLAG: --image-service-endpoint=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559543 4775 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559553 4775 flags.go:64] FLAG: --kube-api-burst="100"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559563 4775 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559572 4775 flags.go:64] FLAG: --kube-api-qps="50"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559586 4775 flags.go:64] FLAG: --kube-reserved=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559596 4775 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559605 4775 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559615 4775 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559624 4775 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559633 4775 flags.go:64] FLAG: --lock-file=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559642 4775 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559679 4775 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559690 4775 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559703 4775 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559712 4775 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559722 4775 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559731 4775 flags.go:64] FLAG: --logging-format="text"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559740 4775 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559750 4775 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559759 4775 flags.go:64] FLAG: --manifest-url=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559768 4775 flags.go:64] FLAG: --manifest-url-header=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559780 4775 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559790 4775 flags.go:64] FLAG: --max-open-files="1000000"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559801 4775 flags.go:64] FLAG: --max-pods="110"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559811 4775 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559820 4775 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559829 4775 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559838 4775 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559849 4775 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559858 4775 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559867 4775 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559891 4775 flags.go:64] FLAG: --node-status-max-images="50"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559900 4775 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559909 4775 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559919 4775 flags.go:64] FLAG: --pod-cidr=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559928 4775 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559943 4775 flags.go:64] FLAG: --pod-manifest-path=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559952 4775 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559961 4775 flags.go:64] FLAG: --pods-per-core="0"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559971 4775 flags.go:64] FLAG: --port="10250"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559981 4775 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.559992 4775 flags.go:64] FLAG: --provider-id=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560001 4775 flags.go:64] FLAG: --qos-reserved=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560011 4775 flags.go:64] FLAG: --read-only-port="10255"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560021 4775 flags.go:64] FLAG: --register-node="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560033 4775 flags.go:64] FLAG: --register-schedulable="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560043 4775 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560060 4775 flags.go:64] FLAG: --registry-burst="10"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560069 4775 flags.go:64] FLAG: --registry-qps="5"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560079 4775 flags.go:64] FLAG: --reserved-cpus=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560088 4775 flags.go:64] FLAG: --reserved-memory=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560100 4775 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560109 4775 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560118 4775 flags.go:64] FLAG: --rotate-certificates="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560127 4775 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560136 4775 flags.go:64] FLAG: --runonce="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560145 4775 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560154 4775 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560164 4775 flags.go:64] FLAG: --seccomp-default="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560173 4775 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560182 4775 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560191 4775 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560201 4775 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560210 4775 flags.go:64] FLAG: --storage-driver-password="root"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560220 4775 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560229 4775 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560238 4775 flags.go:64] FLAG: --storage-driver-user="root"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560247 4775 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560256 4775 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560266 4775 flags.go:64] FLAG: --system-cgroups=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560275 4775 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560289 4775 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560297 4775 flags.go:64] FLAG: --tls-cert-file=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560306 4775 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560318 4775 flags.go:64] FLAG: --tls-min-version=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560334 4775 flags.go:64] FLAG: --tls-private-key-file=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560344 4775 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560354 4775 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560363 4775 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560373 4775 flags.go:64] FLAG: --v="2"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560386 4775 flags.go:64] FLAG: --version="false"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560398 4775 flags.go:64] FLAG: --vmodule=""
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560409 4775 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.560419 4775 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560703 4775 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560715 4775 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560725 4775 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560733 4775 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560742 4775 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560751 4775 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560760 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560769 4775 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560777 4775 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560786 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560796 4775 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560805 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560813 4775 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560822 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560831 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560840 4775 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560848 4775 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560858 4775 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560866 4775 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560874 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560884 4775 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560894 4775 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560903 4775 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560916 4775 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560926 4775 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560934 4775 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560942 4775 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560950 4775 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560960 4775 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560968 4775 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560978 4775 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560988 4775 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.560996 4775 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561004 4775 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561012 4775 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561020 4775 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561029 4775 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561038 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561046 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561054 4775 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561061 4775 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561069 4775 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561084 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561092 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561100 4775 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561108 4775 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561115 4775 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561123 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561131 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561139 4775 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561147 4775 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561154 4775 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561162 4775 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561170 4775 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561178 4775 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561193 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561201 4775 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561208 4775 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561217 4775 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561224 4775 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561232 4775 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561240 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561247 4775 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561255 4775 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561267 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561275 4775 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561283 4775 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561290 4775 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561298 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561308 4775 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.561318 4775 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.562328 4775 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.576419 4775 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.576476 4775 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576639 4775 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576695 4775 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576708 4775 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576719 4775 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576729 4775 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576738 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576747 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 
19:33:38.576757 4775 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576765 4775 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576774 4775 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576782 4775 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576793 4775 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576805 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576814 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576823 4775 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576830 4775 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576838 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576846 4775 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576855 4775 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576876 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576884 4775 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576892 4775 feature_gate.go:330] unrecognized feature 
gate: GatewayAPI Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576899 4775 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576908 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576917 4775 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576927 4775 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576937 4775 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576946 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576955 4775 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576964 4775 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576972 4775 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576980 4775 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576987 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.576999 4775 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577010 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577020 4775 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577029 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577040 4775 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577050 4775 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577059 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577081 4775 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577092 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577101 4775 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577109 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577118 4775 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577126 4775 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577135 4775 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577143 4775 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577150 4775 feature_gate.go:330] unrecognized feature gate: Example Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577159 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577166 4775 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577174 4775 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577182 4775 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577190 4775 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577198 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 19:33:38 crc kubenswrapper[4775]: 
W1125 19:33:38.577217 4775 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577226 4775 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577234 4775 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577242 4775 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577250 4775 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577258 4775 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577265 4775 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577273 4775 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577281 4775 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577289 4775 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577297 4775 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577305 4775 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577312 4775 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577320 4775 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577328 4775 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 
19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577336 4775 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.577351 4775 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577624 4775 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577639 4775 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577673 4775 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577684 4775 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577718 4775 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577730 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577741 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577752 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577763 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 
19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577773 4775 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577782 4775 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577791 4775 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577801 4775 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577810 4775 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577820 4775 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577834 4775 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577848 4775 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577858 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577868 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577893 4775 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577902 4775 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577910 4775 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577917 4775 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577927 4775 
feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577935 4775 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577943 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577951 4775 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577960 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577967 4775 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577975 4775 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577983 4775 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577991 4775 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.577998 4775 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578006 4775 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578013 4775 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578021 4775 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578029 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578040 4775 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578049 4775 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578058 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578066 4775 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578074 4775 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578082 4775 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578089 4775 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578098 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578106 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578114 4775 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578122 4775 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578130 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578138 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578146 4775 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578154 4775 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578165 4775 
feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578175 4775 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578185 4775 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578214 4775 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578223 4775 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578231 4775 feature_gate.go:330] unrecognized feature gate: Example Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578242 4775 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578251 4775 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578261 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578269 4775 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578277 4775 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578285 4775 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578294 4775 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578302 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 
19:33:38.578310 4775 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578319 4775 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578328 4775 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578338 4775 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.578348 4775 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.578364 4775 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.579783 4775 server.go:940] "Client rotation is on, will bootstrap in background" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.586244 4775 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.586400 4775 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.588354 4775 server.go:997] "Starting client certificate rotation" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.588426 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.588722 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-29 22:42:40.726238697 +0000 UTC Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.588862 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 99h9m2.137381746s for next certificate rotation Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.616325 4775 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.619541 4775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.641362 4775 log.go:25] "Validated CRI v1 runtime API" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.684530 4775 log.go:25] "Validated CRI v1 image API" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.687440 4775 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.694120 4775 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-19-28-44-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.694175 4775 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.723093 4775 manager.go:217] Machine: {Timestamp:2025-11-25 19:33:38.719681461 +0000 UTC m=+0.636043877 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4bfe9575-225a-4848-84aa-a2e7c416ae57 BootID:1976b9c3-06ba-426e-8e28-5609feece292 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5f:05:37 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5f:05:37 Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:97:e9:d6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e9:4e:0f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a9:5d:89 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:9b:d7:bf Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:9c:0a:c9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:52:b6:18:4f:37:20 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:f0:6d:b7:fd:6c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] 
SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.723528 4775 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.723776 4775 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.725458 4775 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.725893 4775 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.725957 4775 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.726306 4775 topology_manager.go:138] "Creating topology manager with none policy" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.726324 4775 container_manager_linux.go:303] "Creating device plugin manager" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.726967 4775 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.727016 4775 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.727729 4775 state_mem.go:36] "Initialized new in-memory state store" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.727889 4775 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.731741 4775 kubelet.go:418] "Attempting to sync node with API server" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.731778 4775 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.731804 4775 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.731827 4775 kubelet.go:324] "Adding apiserver pod source" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.731847 4775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 
19:33:38.739145 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.739275 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.739371 4775 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.739358 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.739534 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.740613 4775 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.743487 4775 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744861 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744893 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744901 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744912 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744929 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744940 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744950 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744965 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744977 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.744987 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.745000 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.745009 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.747066 4775 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.747597 4775 server.go:1280] "Started kubelet" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.747997 4775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.748300 4775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 25 19:33:38 crc systemd[1]: Started Kubernetes Kubelet. Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.752213 4775 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.752753 4775 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.756752 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.756823 4775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.764439 4775 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.764480 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:27:02.760552012 +0000 UTC Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.764583 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 779h53m23.995974333s for next certificate rotation Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.764460 4775 volume_manager.go:287] "The desired_state_of_world populator 
starts" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.765139 4775 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.765327 4775 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.765121 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="200ms" Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.765430 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.765545 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.765900 4775 server.go:460] "Adding debug handlers to kubelet server" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.767594 4775 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.767824 4775 factory.go:55] Registering systemd factory Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.767997 4775 factory.go:221] Registration of the systemd container 
factory successfully Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.765735 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.248:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b56e491fe0e72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 19:33:38.747559538 +0000 UTC m=+0.663921914,LastTimestamp:2025-11-25 19:33:38.747559538 +0000 UTC m=+0.663921914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.768738 4775 factory.go:153] Registering CRI-O factory Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.769289 4775 factory.go:221] Registration of the crio container factory successfully Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.769341 4775 factory.go:103] Registering Raw factory Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.769368 4775 manager.go:1196] Started watching for new ooms in manager Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.772099 4775 manager.go:319] Starting recovery of all containers Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774521 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774583 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774599 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774615 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774627 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774639 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774677 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774691 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774706 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774723 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774736 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774751 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774765 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774778 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774790 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774802 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774817 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774829 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774841 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774854 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" 
seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774869 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774883 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774896 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774909 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774921 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774934 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774948 4775 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774972 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774985 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.774995 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775005 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775018 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775029 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775041 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775054 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775065 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775078 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775090 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775103 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775114 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775126 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775139 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775151 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775164 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775177 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775188 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775199 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775210 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775222 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775233 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775245 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775257 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775274 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775286 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775300 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775313 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775326 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 25 19:33:38 crc 
kubenswrapper[4775]: I1125 19:33:38.775340 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775352 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775366 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775380 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775394 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775406 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775420 4775 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775434 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775448 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775460 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775472 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775484 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775495 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775510 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775523 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775536 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775551 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775565 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775577 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775590 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775602 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775612 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775624 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775635 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775662 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775675 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775687 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775698 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775708 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775720 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775732 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775743 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775756 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775767 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775781 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775796 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775808 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775821 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775833 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775846 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775859 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775871 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775883 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775896 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775911 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775923 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775936 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775960 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775975 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.775989 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776002 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776015 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776029 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776041 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776054 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776066 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776079 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776091 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776108 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776121 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776132 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" 
seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776142 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776152 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776164 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776177 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776194 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776205 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 
19:33:38.776216 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776227 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776237 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776248 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776259 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776272 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776283 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776293 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776305 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776318 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776330 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776341 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776362 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776375 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776388 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776402 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776414 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776426 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776438 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776452 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776464 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776476 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776489 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776501 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776514 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" 
seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776526 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776558 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776570 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776581 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776595 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776612 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: 
I1125 19:33:38.776625 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776637 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776666 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776679 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776691 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776703 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776717 4775 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776729 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776740 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776751 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776762 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776773 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776784 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776797 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776811 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776823 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776840 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776854 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776866 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776879 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776891 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776904 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776918 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776930 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776941 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776953 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776965 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776978 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.776990 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777002 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777016 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777032 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777045 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777057 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777069 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777081 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777092 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 
25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777105 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777118 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777131 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777143 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777161 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777173 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777185 4775 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777200 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777213 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777228 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.777241 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.779360 4775 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.779388 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.779401 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.779412 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.779424 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.779436 4775 reconstruct.go:97] "Volume reconstruction finished" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.779446 4775 reconciler.go:26] "Reconciler: start to sync state" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.813122 4775 manager.go:324] Recovery completed Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.825846 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc 
kubenswrapper[4775]: I1125 19:33:38.827861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.827940 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.827959 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.829258 4775 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.829283 4775 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.829328 4775 state_mem.go:36] "Initialized new in-memory state store" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.842860 4775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.845575 4775 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.845702 4775 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.845763 4775 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.845907 4775 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 19:33:38 crc kubenswrapper[4775]: W1125 19:33:38.847698 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.847834 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.848637 4775 policy_none.go:49] "None policy: Start" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.850367 4775 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.850436 4775 state_mem.go:35] "Initializing new in-memory state store" Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.864971 4775 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.913274 4775 manager.go:334] "Starting Device Plugin manager" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.913357 4775 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.913379 4775 server.go:79] "Starting device plugin registration server" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.914077 4775 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.914103 4775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.914728 4775 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.914990 4775 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.915015 4775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.928327 4775 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.946716 4775 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.946880 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.948714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.948789 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.948809 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.949112 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.949573 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.949711 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.950959 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.951007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.951030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.951170 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.951264 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.951309 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.951363 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.951319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.951430 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.953584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.953662 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.953678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.953584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.953810 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.953838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.954056 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.954246 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.954308 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.955510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.955553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.955612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.955617 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.955725 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.955743 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.955968 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.956077 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.956130 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.957091 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.957127 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.957138 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.959530 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.959573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.959588 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.959843 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.959881 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.961207 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.961258 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.961278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:38 crc kubenswrapper[4775]: E1125 19:33:38.966779 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="400ms" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982334 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982422 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982466 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982499 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982532 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982562 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982589 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982617 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982645 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982717 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982747 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982778 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982808 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982843 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:38 crc kubenswrapper[4775]: I1125 19:33:38.982869 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.014392 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.015557 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.015603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.015616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.015667 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 19:33:39 crc kubenswrapper[4775]: E1125 19:33:39.016296 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.248:6443: connect: connection refused" node="crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 
19:33:39.084596 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.084690 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.084730 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.084792 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.084836 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.084892 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.084930 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.084875 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.084958 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.084982 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085001 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085003 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085063 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085077 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085101 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085111 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085140 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085150 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085184 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085181 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085221 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085229 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:39 crc 
kubenswrapper[4775]: I1125 19:33:39.085246 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085267 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085287 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085320 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085352 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085381 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085411 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.085268 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.217054 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.219583 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.219682 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.219705 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.219746 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 19:33:39 crc kubenswrapper[4775]: E1125 19:33:39.220295 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.248:6443: connect: connection 
refused" node="crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.288570 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.310222 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.330735 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: W1125 19:33:39.353102 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-fe3287f9b957b342c8ed57d6ef9c336499079f30405d53dd7ff1ee6cd14e62c1 WatchSource:0}: Error finding container fe3287f9b957b342c8ed57d6ef9c336499079f30405d53dd7ff1ee6cd14e62c1: Status 404 returned error can't find the container with id fe3287f9b957b342c8ed57d6ef9c336499079f30405d53dd7ff1ee6cd14e62c1 Nov 25 19:33:39 crc kubenswrapper[4775]: W1125 19:33:39.354389 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1abfa8981564ce39f902027ec72ca45bce33bc58a3b158f246a6c5ecf7338304 WatchSource:0}: Error finding container 1abfa8981564ce39f902027ec72ca45bce33bc58a3b158f246a6c5ecf7338304: Status 404 returned error can't find the container with id 1abfa8981564ce39f902027ec72ca45bce33bc58a3b158f246a6c5ecf7338304 Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.356975 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: W1125 19:33:39.360466 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-dc3bf92d8606e429ad0104862dbc0207d822357e5dc932a0e51c622a94ec1b02 WatchSource:0}: Error finding container dc3bf92d8606e429ad0104862dbc0207d822357e5dc932a0e51c622a94ec1b02: Status 404 returned error can't find the container with id dc3bf92d8606e429ad0104862dbc0207d822357e5dc932a0e51c622a94ec1b02 Nov 25 19:33:39 crc kubenswrapper[4775]: E1125 19:33:39.367502 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="800ms" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.367804 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 19:33:39 crc kubenswrapper[4775]: W1125 19:33:39.378487 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d6b6ce00fd2b0176c414b5e3052c225d30725f4ee73c07ac26e9ae87ec88162b WatchSource:0}: Error finding container d6b6ce00fd2b0176c414b5e3052c225d30725f4ee73c07ac26e9ae87ec88162b: Status 404 returned error can't find the container with id d6b6ce00fd2b0176c414b5e3052c225d30725f4ee73c07ac26e9ae87ec88162b Nov 25 19:33:39 crc kubenswrapper[4775]: W1125 19:33:39.401166 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fcf3e5e70c6ad1d0a4d3933ef4dff9db68dda23d3b68b5527c297b8c754cb147 WatchSource:0}: Error finding container fcf3e5e70c6ad1d0a4d3933ef4dff9db68dda23d3b68b5527c297b8c754cb147: Status 404 returned error can't find the container with id fcf3e5e70c6ad1d0a4d3933ef4dff9db68dda23d3b68b5527c297b8c754cb147 Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.620773 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.623453 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.623524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.623544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.623610 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 19:33:39 crc kubenswrapper[4775]: E1125 
19:33:39.624288 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.248:6443: connect: connection refused" node="crc" Nov 25 19:33:39 crc kubenswrapper[4775]: W1125 19:33:39.646343 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:39 crc kubenswrapper[4775]: E1125 19:33:39.646484 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.754825 4775 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.852785 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d6b6ce00fd2b0176c414b5e3052c225d30725f4ee73c07ac26e9ae87ec88162b"} Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.855088 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dc3bf92d8606e429ad0104862dbc0207d822357e5dc932a0e51c622a94ec1b02"} Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.856968 4775 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fe3287f9b957b342c8ed57d6ef9c336499079f30405d53dd7ff1ee6cd14e62c1"} Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.864683 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1abfa8981564ce39f902027ec72ca45bce33bc58a3b158f246a6c5ecf7338304"} Nov 25 19:33:39 crc kubenswrapper[4775]: I1125 19:33:39.866644 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fcf3e5e70c6ad1d0a4d3933ef4dff9db68dda23d3b68b5527c297b8c754cb147"} Nov 25 19:33:39 crc kubenswrapper[4775]: W1125 19:33:39.871835 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:39 crc kubenswrapper[4775]: E1125 19:33:39.871943 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" Nov 25 19:33:39 crc kubenswrapper[4775]: W1125 19:33:39.947080 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:39 crc kubenswrapper[4775]: E1125 19:33:39.947220 4775 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" Nov 25 19:33:40 crc kubenswrapper[4775]: W1125 19:33:40.006077 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:40 crc kubenswrapper[4775]: E1125 19:33:40.006177 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError" Nov 25 19:33:40 crc kubenswrapper[4775]: E1125 19:33:40.169145 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="1.6s" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.425395 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.427300 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.427345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.427360 4775 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.427388 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 19:33:40 crc kubenswrapper[4775]: E1125 19:33:40.428080 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.248:6443: connect: connection refused" node="crc" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.754356 4775 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.873052 4775 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf" exitCode=0 Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.873162 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.873249 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf"} Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.874834 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.874938 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.874969 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.876588 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b" exitCode=0 Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.876760 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b"} Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.876823 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.878355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.878409 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.878427 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.880902 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae"} Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.881004 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28"} 
Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.881026 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71"} Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.886689 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32" exitCode=0 Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.886781 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32"} Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.886956 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.889519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.889589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.889617 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.890460 4775 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2cadcba6e5655aa13a8aacfd44314fb9d68093e60136930bfed3d4812da3f451" exitCode=0 Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.890511 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2cadcba6e5655aa13a8aacfd44314fb9d68093e60136930bfed3d4812da3f451"} Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.890738 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.896901 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.896957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.896977 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.897445 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.899514 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.899591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:33:40 crc kubenswrapper[4775]: I1125 19:33:40.899614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:33:41 crc kubenswrapper[4775]: W1125 19:33:41.268902 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused Nov 25 19:33:41 crc kubenswrapper[4775]: E1125 19:33:41.269023 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.753708 4775 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused
Nov 25 19:33:41 crc kubenswrapper[4775]: E1125 19:33:41.770569 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="3.2s"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.895994 4775 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="29ac9241535bcd3e169a83d8925c8f154e4e67487c3eb81de7b57faffdb48912" exitCode=0
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.896160 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"29ac9241535bcd3e169a83d8925c8f154e4e67487c3eb81de7b57faffdb48912"}
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.896164 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.897598 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.897637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.897676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.899056 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ad64a22adbab6e2dfb0a2b3491957bf199625f65eb944136f9e74100ca4323a2"}
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.899115 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.900215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.900250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.900266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.901810 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70"}
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.901844 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136"}
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.904994 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab"}
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.905079 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.906402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.906440 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.906453 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.911268 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1"}
Nov 25 19:33:41 crc kubenswrapper[4775]: I1125 19:33:41.911310 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210"}
Nov 25 19:33:41 crc kubenswrapper[4775]: W1125 19:33:41.997923 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused
Nov 25 19:33:41 crc kubenswrapper[4775]: E1125 19:33:41.998045 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.029150 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.030721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.030757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.030767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.030792 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 25 19:33:42 crc kubenswrapper[4775]: E1125 19:33:42.031288 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.248:6443: connect: connection refused" node="crc"
Nov 25 19:33:42 crc kubenswrapper[4775]: W1125 19:33:42.316494 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused
Nov 25 19:33:42 crc kubenswrapper[4775]: E1125 19:33:42.316628 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError"
Nov 25 19:33:42 crc kubenswrapper[4775]: W1125 19:33:42.360140 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused
Nov 25 19:33:42 crc kubenswrapper[4775]: E1125 19:33:42.360235 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.248:6443: connect: connection refused" logger="UnhandledError"
Nov 25 19:33:42 crc kubenswrapper[4775]: E1125 19:33:42.749692 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.248:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b56e491fe0e72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 19:33:38.747559538 +0000 UTC m=+0.663921914,LastTimestamp:2025-11-25 19:33:38.747559538 +0000 UTC m=+0.663921914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.754246 4775 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.248:6443: connect: connection refused
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.916999 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"07f3bfc26632516442b79199b4f205bcde568ac3c73dac5b3b4191f101732389"}
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.917076 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692"}
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.917097 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9"}
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.917160 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.919100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.919160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.919183 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.921567 4775 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="645dff9a8fccef3d56245f3c47b3aab0e94a5e269873a7b93f55175978cd7cd9" exitCode=0
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.921601 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"645dff9a8fccef3d56245f3c47b3aab0e94a5e269873a7b93f55175978cd7cd9"}
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.921738 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.922838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.922895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.922921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.927915 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e"}
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.928039 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.928072 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.928081 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.929394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.929438 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.929457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.929548 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.929585 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.929601 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.929597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.929680 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:42 crc kubenswrapper[4775]: I1125 19:33:42.929702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.477934 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.937168 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3726934f4cbdc4669af2954e1b4813a9851788e02f7d3baa24faced757240c21"}
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.937231 4775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.937256 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"46fb4ca2acf1b226629b57a57218281ceb66f604fc7c1b10139656d09698cdae"}
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.937280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d7b1f05791a83e3e1ae935c58c1330f93283071cc81a8f4f63a4497a9274d0fd"}
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.937315 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.937319 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.937385 4775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.937477 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.939467 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.939478 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.939530 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.939531 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.939559 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.939688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.939767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.939797 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:43 crc kubenswrapper[4775]: I1125 19:33:43.939814 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.314018 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.947228 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e0253896f15dec473ed89fa40f83df1e69318db894ecc38ad1ace4f387f214bb"}
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.947311 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"156455cd2f2d37d657bfd0395ff871734f6dce127f45f2c8e0219c4e2d439f5f"}
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.947249 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.947351 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.949102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.949150 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.949108 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.949212 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.949239 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:44 crc kubenswrapper[4775]: I1125 19:33:44.949169 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.231834 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.233833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.233885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.233906 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.233942 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.459891 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.460156 4775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.460220 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.462065 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.462129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.462148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.896813 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.950291 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.951799 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.951855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.951867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.981596 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.981822 4775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.981880 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.983429 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.983482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:45 crc kubenswrapper[4775]: I1125 19:33:45.983500 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.217830 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.218094 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.219602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.219704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.219728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.508106 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.521525 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.953721 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.953862 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.959617 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.959722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.959752 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.959784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.959840 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:46 crc kubenswrapper[4775]: I1125 19:33:46.959860 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.314727 4775 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.314879 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.750232 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.750624 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.752591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.752665 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.752675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.956234 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.957722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.957797 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:47 crc kubenswrapper[4775]: I1125 19:33:47.957818 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:48 crc kubenswrapper[4775]: E1125 19:33:48.928695 4775 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.849749 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.850088 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.852332 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.852390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.852410 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.858608 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.963436 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.965267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.965340 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:49 crc kubenswrapper[4775]: I1125 19:33:49.965365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.016786 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37564->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.016873 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37564->192.168.126.11:17697: read: connection reset by peer"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.085106 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.085193 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.091795 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.091884 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.488347 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.488532 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.490051 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.490113 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.490135 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.978542 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.981230 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="07f3bfc26632516442b79199b4f205bcde568ac3c73dac5b3b4191f101732389" exitCode=255
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.981280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"07f3bfc26632516442b79199b4f205bcde568ac3c73dac5b3b4191f101732389"}
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.981521 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.982725 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.982755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.982766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:53 crc kubenswrapper[4775]: I1125 19:33:53.983283 4775 scope.go:117] "RemoveContainer" containerID="07f3bfc26632516442b79199b4f205bcde568ac3c73dac5b3b4191f101732389"
Nov 25 19:33:54 crc kubenswrapper[4775]: I1125 19:33:54.989487 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 25 19:33:54 crc kubenswrapper[4775]: I1125 19:33:54.992794 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89"}
Nov 25 19:33:54 crc kubenswrapper[4775]: I1125 19:33:54.993083 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:54 crc kubenswrapper[4775]: I1125 19:33:54.994698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:54 crc kubenswrapper[4775]: I1125 19:33:54.994786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:54 crc kubenswrapper[4775]: I1125 19:33:54.994809 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:55 crc kubenswrapper[4775]: I1125 19:33:55.469951 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 19:33:55 crc kubenswrapper[4775]: I1125 19:33:55.990151 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 19:33:55 crc kubenswrapper[4775]: I1125 19:33:55.996234 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:55 crc kubenswrapper[4775]: I1125 19:33:55.996384 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 19:33:55 crc kubenswrapper[4775]: I1125 19:33:55.997887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:55 crc kubenswrapper[4775]: I1125 19:33:55.997943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:55 crc kubenswrapper[4775]: I1125 19:33:55.997967 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:56 crc kubenswrapper[4775]: I1125 19:33:56.564272 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Nov 25 19:33:56 crc kubenswrapper[4775]: I1125 19:33:56.564527 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:56 crc kubenswrapper[4775]: I1125 19:33:56.566974 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:56 crc kubenswrapper[4775]: I1125 19:33:56.567039 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:56 crc kubenswrapper[4775]: I1125 19:33:56.567064 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:56 crc kubenswrapper[4775]: I1125 19:33:56.587191 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Nov 25 19:33:56 crc kubenswrapper[4775]: I1125 19:33:56.999844 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:57 crc kubenswrapper[4775]: I1125 19:33:56.999981 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 19:33:57 crc kubenswrapper[4775]: I1125 19:33:57.001492 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:57 crc kubenswrapper[4775]: I1125 19:33:57.001540 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:57 crc kubenswrapper[4775]: I1125 19:33:57.001557 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:57 crc kubenswrapper[4775]: I1125 19:33:57.003566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:33:57 crc kubenswrapper[4775]: I1125 19:33:57.003681 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:33:57 crc kubenswrapper[4775]: I1125 19:33:57.003700 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:33:57 crc kubenswrapper[4775]: I1125 19:33:57.314727 4775 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 25 19:33:57 crc kubenswrapper[4775]: I1125 19:33:57.314999 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e"
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.075077 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.080855 4775 trace.go:236] Trace[1645772618]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 19:33:47.400) (total time: 10680ms): Nov 25 19:33:58 crc kubenswrapper[4775]: Trace[1645772618]: ---"Objects listed" error: 10679ms (19:33:58.080) Nov 25 19:33:58 crc kubenswrapper[4775]: Trace[1645772618]: [10.680048226s] [10.680048226s] END Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.080896 4775 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.084931 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.087009 4775 trace.go:236] Trace[1692502691]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 19:33:46.274) (total time: 11812ms): Nov 25 19:33:58 crc kubenswrapper[4775]: Trace[1692502691]: ---"Objects listed" error: 11812ms (19:33:58.086) Nov 25 19:33:58 crc kubenswrapper[4775]: Trace[1692502691]: [11.812836194s] [11.812836194s] END Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.087064 4775 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 19:33:58 crc 
kubenswrapper[4775]: I1125 19:33:58.087135 4775 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.088310 4775 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.089630 4775 trace.go:236] Trace[1100488319]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 19:33:47.973) (total time: 10115ms): Nov 25 19:33:58 crc kubenswrapper[4775]: Trace[1100488319]: ---"Objects listed" error: 10115ms (19:33:58.089) Nov 25 19:33:58 crc kubenswrapper[4775]: Trace[1100488319]: [10.115547627s] [10.115547627s] END Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.089714 4775 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.745136 4775 apiserver.go:52] "Watching apiserver" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.749585 4775 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.750026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.750565 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.750977 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.751005 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.751154 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.751177 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.751294 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.751976 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.752044 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.752062 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.753091 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.753985 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.754867 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.755068 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.755602 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.755835 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.757003 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.757297 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.757593 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.767245 4775 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.789687 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.790868 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.790930 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.790972 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791012 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791067 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791103 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791139 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791175 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 19:33:58 crc 
kubenswrapper[4775]: I1125 19:33:58.791214 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791252 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791294 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791329 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791367 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791370 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" 
(OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791414 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791459 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791596 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791602 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod 
"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791677 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791733 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791790 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791839 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791888 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 
19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791941 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791921 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.791991 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792052 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792100 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792110 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792156 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792296 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792339 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792381 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792386 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod 
"09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792418 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792456 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792497 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792533 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792541 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). 
InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792567 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792790 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792865 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.792932 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793000 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793115 4775 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793169 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793127 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793237 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793298 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793365 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793428 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793486 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793545 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793601 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793699 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793561 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793762 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793822 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793841 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793886 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793951 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794007 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794064 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794120 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794177 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794231 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794284 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794349 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794404 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794458 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 19:33:58 crc 
kubenswrapper[4775]: I1125 19:33:58.794512 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794629 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.795287 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.795363 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.795430 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.795498 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.795713 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.795775 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796215 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796336 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796398 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796452 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796510 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796626 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796722 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796788 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796843 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796899 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.796973 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797025 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 19:33:58 crc 
kubenswrapper[4775]: I1125 19:33:58.797079 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797132 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797188 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797278 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797417 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793887 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" 
(OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797473 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797459 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.793933 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797547 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.798046 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.798265 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.798881 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.798936 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.799206 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.800960 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.800965 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.795130 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.795432 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797268 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797356 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.801068 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.801492 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.801724 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.801782 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.802253 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.802433 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.802888 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.803339 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.804479 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.805080 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.808609 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.810059 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.810167 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.810459 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.810679 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.810895 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.811407 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.797536 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812074 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812116 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812124 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812203 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812232 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812262 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812301 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812331 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812358 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812444 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812578 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812605 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812631 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") 
pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812673 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812697 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812722 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812745 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812767 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812794 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812819 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812846 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812872 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812898 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812923 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812946 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.812981 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813002 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813026 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813049 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813106 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813132 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813157 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813179 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813203 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813232 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813256 4775 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813280 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813302 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813352 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813375 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813401 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813425 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813455 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813480 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813484 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813506 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813537 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813558 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813581 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813697 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 
19:33:58.813721 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813745 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813766 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813790 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.832763 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.832840 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.832881 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.832917 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.832952 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833001 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833032 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 19:33:58 crc 
kubenswrapper[4775]: I1125 19:33:58.833064 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833094 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833125 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833156 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833191 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833226 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833257 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833289 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833320 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833350 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833381 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833412 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833450 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833482 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833513 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833543 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833576 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833608 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833640 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833713 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833747 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833777 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833805 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833833 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833862 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833891 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833924 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833953 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.833979 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834009 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834039 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834067 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834104 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834134 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834163 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834191 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834218 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834247 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834277 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834307 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834336 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834367 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834395 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834430 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 19:33:58 crc 
kubenswrapper[4775]: I1125 19:33:58.834457 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834484 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834583 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834633 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834689 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 19:33:58 crc kubenswrapper[4775]: 
I1125 19:33:58.834723 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834789 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834825 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834856 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834889 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834920 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834949 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.834978 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835009 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835041 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835163 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835182 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835199 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835216 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835233 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node 
\"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835251 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835268 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835284 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835300 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835316 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835332 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835348 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 
19:33:58.835366 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835383 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835400 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835416 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835432 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835447 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835463 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835478 4775 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835494 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835511 4775 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835526 4775 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835540 4775 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835557 4775 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835572 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835588 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" 
(UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835604 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835619 4775 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835634 4775 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835668 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835684 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835699 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835714 4775 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" 
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835730 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835747 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835761 4775 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835776 4775 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835791 4775 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835807 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835823 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835841 4775 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835862 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835878 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.835893 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.842560 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.794734 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813560 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.813915 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.814218 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.814446 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.814870 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.815081 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.815478 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.815518 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.815693 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.815982 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.816135 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.816447 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.816725 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.818279 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.819022 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.819502 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.819630 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.820611 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.821137 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.822014 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.822345 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.822747 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.822835 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.823164 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.824000 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.824099 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.824663 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.824788 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.824962 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.825160 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.825595 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.825785 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.825950 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.826289 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.826343 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.826594 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.826825 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.826861 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.827146 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.827281 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.827460 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.856100 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.827815 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.827880 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.828107 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.828159 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.828176 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.828388 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.828628 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.828782 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.829098 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.829109 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.829207 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.829374 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.838693 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.839478 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.839572 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.840193 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.840856 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.841600 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.842990 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.843020 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.843181 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.843696 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.844013 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.844098 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.844418 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.845336 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.845876 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.846214 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.847540 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.847554 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.847921 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.847999 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.848227 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.848997 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.849949 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.851210 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.853120 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.853237 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.853681 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.854058 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.854994 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.856569 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.856947 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.857051 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.857344 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.857702 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.858085 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.859269 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.859576 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.859733 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.859770 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.860569 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.860939 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.861160 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.861157 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.861175 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.861295 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.861276 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.861511 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.861591 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.861943 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.862000 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.862330 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.862343 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.862429 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.862817 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:33:59.362783697 +0000 UTC m=+21.279146263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.863188 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.863571 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.864178 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.862499 4775 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.867245 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.867712 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.867964 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.868445 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.868695 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.868773 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.868932 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.869213 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.869301 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.869943 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.870079 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.870192 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.870366 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 19:33:59.370338718 +0000 UTC m=+21.286701084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.870968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.871123 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.871198 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:33:59.371171701 +0000 UTC m=+21.287534277 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.871991 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.872739 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.873089 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.873177 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.873404 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.873790 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.873796 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.874089 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.874322 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.876032 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.876158 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.876994 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.877806 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.879400 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.880111 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.880584 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.881000 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.889045 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.890025 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.890848 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.890898 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.890914 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.891201 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 19:33:59.39099935 +0000 UTC m=+21.307361716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.891411 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.891743 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.891870 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.891938 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.891965 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.894676 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.895295 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.895373 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.895935 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.896597 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.897266 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.897935 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.898368 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.898474 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.898558 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.898610 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 19:33:58 crc kubenswrapper[4775]: E1125 19:33:58.898808 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 19:33:59.398702166 +0000 UTC m=+21.315064752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.899588 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.899616 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.900289 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.900398 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.900549 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.900752 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.901751 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.902994 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.903147 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.903414 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.904485 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.904779 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.908073 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.910575 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.911153 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.913575 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.917846 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.918981 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.922248 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.923909 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.923996 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.925678 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.926906 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.928991 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.929704 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.931234 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.931250 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.931851 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.933338 4775 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.933476 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.935617 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936195 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936244 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936268 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936314 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936329 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936339 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936371 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936367 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936381 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936466 4775 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936504 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936520 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936533 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936547 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936579 4775 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936594 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936606 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936620 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936669 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936684 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936698 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936713 4775 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936746 4775 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936759 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936771 4775 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936784 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936796 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936829 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936843 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936855 4775 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936867 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936879 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936910 4775 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936923 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936936 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936949 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936960 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936969 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.936994 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937070 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937086 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937099 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937114 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937125 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937138 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937151 4775 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937164 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937176 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937187 4775 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937201 4775 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937213 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937226 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937242 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937256 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937270 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937284 4775 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937299 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937315 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937330 4775 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937345 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937358 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937371 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937385 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937399 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName:
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937412 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937422 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937431 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937441 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937452 4775 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937461 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937474 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937483 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937492 4775 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937503 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937512 4775 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937522 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937532 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937542 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937553 4775 reconciler_common.go:293] "Volume detached for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937564 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937574 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937583 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937592 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937602 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937611 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937621 4775 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node 
\"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937633 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937644 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937669 4775 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937680 4775 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937694 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937707 4775 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937719 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937730 
4775 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937747 4775 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937759 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937771 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937785 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937801 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937815 4775 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937828 4775 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937840 4775 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937853 4775 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937870 4775 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937883 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937898 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937912 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937925 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937938 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937950 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937962 4775 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937975 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.937987 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938000 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938013 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 
19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938027 4775 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938039 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938049 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938062 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938071 4775 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938082 4775 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938092 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938101 4775 reconciler_common.go:293] "Volume detached for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938112 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938122 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938133 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938143 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938153 4775 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938164 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938175 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938184 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938194 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938203 4775 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938215 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938224 4775 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938233 4775 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938242 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938252 4775 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938261 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938271 4775 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938281 4775 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938291 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938301 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938311 4775 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938333 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938343 4775 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938354 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938365 4775 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938374 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938389 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938399 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938409 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 
19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.938418 4775 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.941642 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.944002 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.945785 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.945932 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.947138 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.948390 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.949513 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.951546 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.952086 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.952774 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.953864 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.955029 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.955612 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.955843 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.956761 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.957529 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.958963 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.959701 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 19:33:58 crc 
kubenswrapper[4775]: I1125 19:33:58.961075 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.961973 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.962607 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.963852 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.964462 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.969079 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.986735 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:58 crc kubenswrapper[4775]: I1125 19:33:58.998127 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.012459 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.031775 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.039412 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.043809 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.082188 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 19:33:59 crc kubenswrapper[4775]: W1125 19:33:59.095891 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c07e05cdf02fb47ceeefa2d884d169a3f18871487d0e0ba8bb054de0fd31ac3f WatchSource:0}: Error finding container c07e05cdf02fb47ceeefa2d884d169a3f18871487d0e0ba8bb054de0fd31ac3f: Status 404 returned error can't find the container with id c07e05cdf02fb47ceeefa2d884d169a3f18871487d0e0ba8bb054de0fd31ac3f Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.110496 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 19:33:59 crc kubenswrapper[4775]: W1125 19:33:59.123761 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-93bfd4aaf56c1dab3e6c9d622a7951ea28274ddd037a91153fb1b8cf8286e68d WatchSource:0}: Error finding container 93bfd4aaf56c1dab3e6c9d622a7951ea28274ddd037a91153fb1b8cf8286e68d: Status 404 returned error can't find the container with id 93bfd4aaf56c1dab3e6c9d622a7951ea28274ddd037a91153fb1b8cf8286e68d Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.148670 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 19:33:59 crc kubenswrapper[4775]: W1125 19:33:59.174277 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-49b945ec6c6cc675f5576282d1b1f1aeebbe39191feb893735d2077ce04902fb WatchSource:0}: Error finding container 49b945ec6c6cc675f5576282d1b1f1aeebbe39191feb893735d2077ce04902fb: Status 404 returned error can't find the container with id 49b945ec6c6cc675f5576282d1b1f1aeebbe39191feb893735d2077ce04902fb Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.442887 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.443017 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.443099 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443180 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:34:00.443106426 +0000 UTC m=+22.359468832 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443217 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443283 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.443298 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443321 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443405 4775 secret.go:188] Couldn't 
get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443411 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443364 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:00.443339503 +0000 UTC m=+22.359701869 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.443556 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443609 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 19:34:00.443585819 +0000 UTC m=+22.359948395 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443695 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:00.44363468 +0000 UTC m=+22.359997266 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443728 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443755 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443771 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:33:59 crc kubenswrapper[4775]: E1125 19:33:59.443824 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:00.443810145 +0000 UTC m=+22.360172521 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.463438 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8p9p9"] Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.466815 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8p9p9" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.471181 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.471471 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.473159 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.504048 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.526501 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.543702 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.544161 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3099556d-7e22-4d2c-9dcc-1a8465a2bd32-hosts-file\") pod \"node-resolver-8p9p9\" (UID: \"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\") " pod="openshift-dns/node-resolver-8p9p9" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.544215 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlvth\" (UniqueName: \"kubernetes.io/projected/3099556d-7e22-4d2c-9dcc-1a8465a2bd32-kube-api-access-dlvth\") pod 
\"node-resolver-8p9p9\" (UID: \"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\") " pod="openshift-dns/node-resolver-8p9p9" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.580086 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.606360 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.644765 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.645160 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3099556d-7e22-4d2c-9dcc-1a8465a2bd32-hosts-file\") pod \"node-resolver-8p9p9\" (UID: \"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\") " pod="openshift-dns/node-resolver-8p9p9" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.645201 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlvth\" (UniqueName: \"kubernetes.io/projected/3099556d-7e22-4d2c-9dcc-1a8465a2bd32-kube-api-access-dlvth\") pod \"node-resolver-8p9p9\" (UID: \"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\") " pod="openshift-dns/node-resolver-8p9p9" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.645443 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3099556d-7e22-4d2c-9dcc-1a8465a2bd32-hosts-file\") pod \"node-resolver-8p9p9\" (UID: \"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\") " pod="openshift-dns/node-resolver-8p9p9" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.674669 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.681197 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlvth\" (UniqueName: \"kubernetes.io/projected/3099556d-7e22-4d2c-9dcc-1a8465a2bd32-kube-api-access-dlvth\") pod \"node-resolver-8p9p9\" (UID: 
\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\") " pod="openshift-dns/node-resolver-8p9p9" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.784331 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8p9p9" Nov 25 19:33:59 crc kubenswrapper[4775]: W1125 19:33:59.796621 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3099556d_7e22_4d2c_9dcc_1a8465a2bd32.slice/crio-0e01042cdf592e2d9e891aecd9cdf89c22f6016820c60677f603b4b04ff4eeec WatchSource:0}: Error finding container 0e01042cdf592e2d9e891aecd9cdf89c22f6016820c60677f603b4b04ff4eeec: Status 404 returned error can't find the container with id 0e01042cdf592e2d9e891aecd9cdf89c22f6016820c60677f603b4b04ff4eeec Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.952955 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-w4zbm"] Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.953944 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.956262 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.959429 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.959696 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.959835 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.965788 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.970472 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 19:33:59 crc kubenswrapper[4775]: I1125 19:33:59.997419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:33:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.013792 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.013895 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483"} Nov 25 19:34:00 crc 
kubenswrapper[4775]: I1125 19:34:00.013959 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c07e05cdf02fb47ceeefa2d884d169a3f18871487d0e0ba8bb054de0fd31ac3f"} Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.018883 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.019378 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.021546 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89" exitCode=255 Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.021584 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89"} Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.021700 4775 scope.go:117] "RemoveContainer" containerID="07f3bfc26632516442b79199b4f205bcde568ac3c73dac5b3b4191f101732389" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.022802 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"49b945ec6c6cc675f5576282d1b1f1aeebbe39191feb893735d2077ce04902fb"} Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.024496 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-8p9p9" event={"ID":"3099556d-7e22-4d2c-9dcc-1a8465a2bd32","Type":"ContainerStarted","Data":"0e01042cdf592e2d9e891aecd9cdf89c22f6016820c60677f603b4b04ff4eeec"} Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.028108 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.028308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf"} Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.028335 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445"} Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.028347 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"93bfd4aaf56c1dab3e6c9d622a7951ea28274ddd037a91153fb1b8cf8286e68d"} Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.041886 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.049052 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/bdb8b79f-4ccd-4606-8f27-e26301ffc656-rootfs\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.049094 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdb8b79f-4ccd-4606-8f27-e26301ffc656-proxy-tls\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.049134 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdb8b79f-4ccd-4606-8f27-e26301ffc656-mcd-auth-proxy-config\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.049154 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zckkq\" (UniqueName: \"kubernetes.io/projected/bdb8b79f-4ccd-4606-8f27-e26301ffc656-kube-api-access-zckkq\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.051560 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.063441 4775 scope.go:117] "RemoveContainer" containerID="bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.063514 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.063701 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.068020 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.087362 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.107185 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07f3bfc26632516442b79199b4f205bcde568ac3c73dac5b3b4191f101732389\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:53Z\\\",\\\"message\\\":\\\"W1125 19:33:42.426022 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 19:33:42.426496 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764099222 cert, and key in /tmp/serving-cert-1691412025/serving-signer.crt, 
/tmp/serving-cert-1691412025/serving-signer.key\\\\nI1125 19:33:42.746347 1 observer_polling.go:159] Starting file observer\\\\nW1125 19:33:42.749929 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 19:33:42.750116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:42.752088 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1691412025/tls.crt::/tmp/serving-cert-1691412025/tls.key\\\\\\\"\\\\nF1125 19:33:53.002976 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.119387 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.131174 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.144230 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.150425 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdb8b79f-4ccd-4606-8f27-e26301ffc656-rootfs\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.150478 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdb8b79f-4ccd-4606-8f27-e26301ffc656-proxy-tls\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.150562 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdb8b79f-4ccd-4606-8f27-e26301ffc656-mcd-auth-proxy-config\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.150585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zckkq\" (UniqueName: \"kubernetes.io/projected/bdb8b79f-4ccd-4606-8f27-e26301ffc656-kube-api-access-zckkq\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.151675 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdb8b79f-4ccd-4606-8f27-e26301ffc656-rootfs\") pod 
\"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.152277 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdb8b79f-4ccd-4606-8f27-e26301ffc656-mcd-auth-proxy-config\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.156234 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdb8b79f-4ccd-4606-8f27-e26301ffc656-proxy-tls\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.156951 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.166915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zckkq\" (UniqueName: \"kubernetes.io/projected/bdb8b79f-4ccd-4606-8f27-e26301ffc656-kube-api-access-zckkq\") pod \"machine-config-daemon-w4zbm\" (UID: \"bdb8b79f-4ccd-4606-8f27-e26301ffc656\") " pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.173692 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.189106 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.201619 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.213355 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.279461 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:34:00 crc kubenswrapper[4775]: W1125 19:34:00.293691 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdb8b79f_4ccd_4606_8f27_e26301ffc656.slice/crio-e3b510acc1e40e0cdcdfca8fd9b56759973c1af6c8235f375f7f610d3463ac21 WatchSource:0}: Error finding container e3b510acc1e40e0cdcdfca8fd9b56759973c1af6c8235f375f7f610d3463ac21: Status 404 returned error can't find the container with id e3b510acc1e40e0cdcdfca8fd9b56759973c1af6c8235f375f7f610d3463ac21 Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.371069 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x28tq"] Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.371816 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.375116 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.375283 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.377463 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.379945 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.380087 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.380443 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-vwq64"] Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.381040 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.381326 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.382078 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.394944 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.396738 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8qf2w"] Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.397250 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.402880 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.403043 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.411375 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.413625 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.414183 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.414242 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"kube-root-ca.crt" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.424854 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.448939 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.452870 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:34:00 crc 
kubenswrapper[4775]: I1125 19:34:00.452961 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovn-node-metrics-cert\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.452988 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-cni-binary-copy\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453013 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453032 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-bin\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453047 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7q6f\" (UniqueName: \"kubernetes.io/projected/1b02c35a-be66-4cf6-afc0-12ddc2f74148-kube-api-access-h7q6f\") pod \"ovnkube-node-x28tq\" (UID: 
\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453061 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453076 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-openvswitch\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453094 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453116 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-config\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453134 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453168 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-netd\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453186 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdjfm\" (UniqueName: \"kubernetes.io/projected/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-kube-api-access-tdjfm\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453211 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453230 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-ovn\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453249 
4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-node-log\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453263 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-cnibin\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453278 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-kubelet\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453292 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-log-socket\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453307 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-script-lib\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 
19:34:00.453322 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-slash\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453340 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-netns\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453358 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-systemd-units\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453378 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453396 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-etc-openvswitch\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc 
kubenswrapper[4775]: I1125 19:34:00.453412 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-ovn-kubernetes\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453426 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-system-cni-dir\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453440 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-os-release\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453457 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-env-overrides\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453473 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-var-lib-openvswitch\") pod \"ovnkube-node-x28tq\" (UID: 
\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453489 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-systemd\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.453510 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.453623 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.453640 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.453672 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.453711 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:02.453696459 +0000 UTC m=+24.370058825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.453815 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:34:02.453807222 +0000 UTC m=+24.370169588 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.453854 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.453876 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:02.453869064 +0000 UTC m=+24.370231430 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.453944 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.453963 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 19:34:02.453957916 +0000 UTC m=+24.370320282 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.454050 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.454060 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.454067 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.454087 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:02.454081009 +0000 UTC m=+24.370443375 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.472441 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resol
ver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.490155 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07f3bfc26632516442b79199b4f205bcde568ac3c73dac5b3b4191f101732389\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:53Z\\\",\\\"message\\\":\\\"W1125 19:33:42.426022 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 19:33:42.426496 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764099222 cert, and key in /tmp/serving-cert-1691412025/serving-signer.crt, 
/tmp/serving-cert-1691412025/serving-signer.key\\\\nI1125 19:33:42.746347 1 observer_polling.go:159] Starting file observer\\\\nW1125 19:33:42.749929 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 19:33:42.750116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:42.752088 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1691412025/tls.crt::/tmp/serving-cert-1691412025/tls.key\\\\\\\"\\\\nF1125 19:33:53.002976 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.511525 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.527307 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.539863 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 
19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.551343 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554234 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-os-release\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554292 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppm9j\" (UniqueName: \"kubernetes.io/projected/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-kube-api-access-ppm9j\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554322 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-os-release\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554343 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-var-lib-cni-multus\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554434 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-etc-openvswitch\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554471 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-ovn-kubernetes\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554493 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-system-cni-dir\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554518 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-ovn-kubernetes\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554494 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-etc-openvswitch\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554518 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-etc-kubernetes\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554561 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-system-cni-dir\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554584 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-env-overrides\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554609 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-run-k8s-cni-cncf-io\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554631 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-conf-dir\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554667 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-var-lib-openvswitch\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-systemd\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554715 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-system-cni-dir\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554719 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-var-lib-openvswitch\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554743 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-cnibin\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554756 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-systemd\") pod \"ovnkube-node-x28tq\" 
(UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554777 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-os-release\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554820 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovn-node-metrics-cert\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554947 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-cni-binary-copy\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.554988 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-bin\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555011 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7q6f\" (UniqueName: \"kubernetes.io/projected/1b02c35a-be66-4cf6-afc0-12ddc2f74148-kube-api-access-h7q6f\") pod \"ovnkube-node-x28tq\" (UID: 
\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555031 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555050 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-openvswitch\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555066 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555087 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-config\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555159 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555202 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-var-lib-kubelet\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555232 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-netd\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555252 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdjfm\" (UniqueName: \"kubernetes.io/projected/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-kube-api-access-tdjfm\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555270 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-cni-binary-copy\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555294 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-daemon-config\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555309 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555316 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-ovn\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555343 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-ovn\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555345 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-node-log\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555363 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-node-log\") pod \"ovnkube-node-x28tq\" (UID: 
\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555399 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-cnibin\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555420 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-cni-dir\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555430 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-env-overrides\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555437 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-hostroot\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555492 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-kubelet\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 
25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555520 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-log-socket\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555549 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-script-lib\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555532 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-bin\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555637 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-log-socket\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555577 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-run-netns\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555719 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-slash\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555742 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-netns\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555771 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-socket-dir-parent\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555803 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-var-lib-cni-bin\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555827 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-run-multus-certs\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555852 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-systemd-units\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555935 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-systemd-units\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555945 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-openvswitch\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555960 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-kubelet\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.555994 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-slash\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.556051 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-netns\") pod 
\"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.556100 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-cnibin\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.556204 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-netd\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.556484 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-config\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.556724 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.556744 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-cni-binary-copy\") pod \"multus-additional-cni-plugins-vwq64\" (UID: 
\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.556760 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-script-lib\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.557099 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.558936 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovn-node-metrics-cert\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.567266 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.571799 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7q6f\" (UniqueName: \"kubernetes.io/projected/1b02c35a-be66-4cf6-afc0-12ddc2f74148-kube-api-access-h7q6f\") pod \"ovnkube-node-x28tq\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.572910 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdjfm\" (UniqueName: \"kubernetes.io/projected/bc4e8832-7db1-4026-aff5-c6d34b2b8f99-kube-api-access-tdjfm\") pod \"multus-additional-cni-plugins-vwq64\" (UID: \"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\") " pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.580297 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.592962 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.607752 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.626254 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.643614 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656614 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-cni-dir\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656683 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-hostroot\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656711 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-run-netns\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656737 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-socket-dir-parent\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656762 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-var-lib-cni-bin\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656783 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-run-multus-certs\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656802 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-hostroot\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656835 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-var-lib-cni-bin\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656872 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-os-release\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " 
pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656886 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-socket-dir-parent\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656896 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-run-multus-certs\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656808 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-run-netns\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656808 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-os-release\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656962 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppm9j\" (UniqueName: \"kubernetes.io/projected/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-kube-api-access-ppm9j\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.656989 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-var-lib-cni-multus\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657025 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-run-k8s-cni-cncf-io\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657026 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-cni-dir\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657063 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-conf-dir\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657111 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-run-k8s-cni-cncf-io\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657042 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-conf-dir\") pod 
\"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657165 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-etc-kubernetes\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657187 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-var-lib-cni-multus\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657199 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-system-cni-dir\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657223 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-etc-kubernetes\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657273 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-cnibin\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657335 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-var-lib-kubelet\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657369 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-cni-binary-copy\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657389 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-daemon-config\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657443 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-host-var-lib-kubelet\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-system-cni-dir\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.657496 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-cnibin\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.658143 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-cni-binary-copy\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.658274 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-multus-daemon-config\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.658335 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.670355 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.674248 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppm9j\" (UniqueName: \"kubernetes.io/projected/850f083c-ad86-47bb-8fd1-4f2a4a9e7831-kube-api-access-ppm9j\") pod \"multus-8qf2w\" (UID: \"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\") " pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.686401 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.688632 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.701671 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: W1125 19:34:00.701975 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b02c35a_be66_4cf6_afc0_12ddc2f74148.slice/crio-efd7930e4c966536d20eee32743a0eb112a93a76fc1aab48a2c03e46779aff1b WatchSource:0}: Error finding container efd7930e4c966536d20eee32743a0eb112a93a76fc1aab48a2c03e46779aff1b: Status 404 returned error can't find the container with id efd7930e4c966536d20eee32743a0eb112a93a76fc1aab48a2c03e46779aff1b Nov 25 19:34:00 crc 
kubenswrapper[4775]: I1125 19:34:00.704050 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vwq64" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.712623 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8qf2w" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.719267 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: W1125 19:34:00.728494 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod850f083c_ad86_47bb_8fd1_4f2a4a9e7831.slice/crio-1515d49eb927c7a8493000194e2e37f0e5af17a151cf4681b582a4760298ce02 WatchSource:0}: Error finding container 1515d49eb927c7a8493000194e2e37f0e5af17a151cf4681b582a4760298ce02: Status 404 returned error can't find the container with id 1515d49eb927c7a8493000194e2e37f0e5af17a151cf4681b582a4760298ce02 Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.733187 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.760009 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07f3bfc26632516442b79199b4f205bcde568ac3c73dac5b3b4191f101732389\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:53Z\\\",\\\"message\\\":\\\"W1125 19:33:42.426022 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 19:33:42.426496 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764099222 cert, and key in /tmp/serving-cert-1691412025/serving-signer.crt, /tmp/serving-cert-1691412025/serving-signer.key\\\\nI1125 19:33:42.746347 1 observer_polling.go:159] Starting file observer\\\\nW1125 19:33:42.749929 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 19:33:42.750116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:42.752088 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1691412025/tls.crt::/tmp/serving-cert-1691412025/tls.key\\\\\\\"\\\\nF1125 19:33:53.002976 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.783625 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:00Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.846845 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.846901 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.846932 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.847011 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.847198 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:00 crc kubenswrapper[4775]: E1125 19:34:00.847340 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.852458 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.853617 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.854481 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.856408 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.858337 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 19:34:00 crc kubenswrapper[4775]: I1125 19:34:00.858865 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.033561 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" event={"ID":"bc4e8832-7db1-4026-aff5-c6d34b2b8f99","Type":"ContainerStarted","Data":"06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.033629 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" event={"ID":"bc4e8832-7db1-4026-aff5-c6d34b2b8f99","Type":"ContainerStarted","Data":"17b98b731a0ec001162ddfca0801c12c503dcf84f86a7282803ac29614eb83dc"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.035889 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356" exitCode=0 Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.035956 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.035979 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"efd7930e4c966536d20eee32743a0eb112a93a76fc1aab48a2c03e46779aff1b"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.038380 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.038446 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.038461 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"e3b510acc1e40e0cdcdfca8fd9b56759973c1af6c8235f375f7f610d3463ac21"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.040933 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.043170 4775 scope.go:117] "RemoveContainer" containerID="bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89" Nov 25 19:34:01 crc kubenswrapper[4775]: E1125 19:34:01.043369 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.044588 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qf2w" 
event={"ID":"850f083c-ad86-47bb-8fd1-4f2a4a9e7831","Type":"ContainerStarted","Data":"cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.044624 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qf2w" event={"ID":"850f083c-ad86-47bb-8fd1-4f2a4a9e7831","Type":"ContainerStarted","Data":"1515d49eb927c7a8493000194e2e37f0e5af17a151cf4681b582a4760298ce02"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.046187 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8p9p9" event={"ID":"3099556d-7e22-4d2c-9dcc-1a8465a2bd32","Type":"ContainerStarted","Data":"a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370"} Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.062644 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07f3bfc26632516442b79199b4f205bcde568ac3c73dac5b3b4191f101732389\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:53Z\\\",\\\"message\\\":\\\"W1125 19:33:42.426022 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 19:33:42.426496 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764099222 cert, and key in /tmp/serving-cert-1691412025/serving-signer.crt, /tmp/serving-cert-1691412025/serving-signer.key\\\\nI1125 19:33:42.746347 1 observer_polling.go:159] Starting file observer\\\\nW1125 19:33:42.749929 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 19:33:42.750116 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:42.752088 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1691412025/tls.crt::/tmp/serving-cert-1691412025/tls.key\\\\\\\"\\\\nF1125 19:33:53.002976 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.086875 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.105736 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.128113 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 
19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.143746 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.168686 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.188473 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.208248 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.222100 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.237929 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.251267 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.264807 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.278370 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.302142 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.317015 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e0
0070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19
:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.339454 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.355578 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.369667 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.384373 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.404876 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.421860 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.434040 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.447633 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:01 crc kubenswrapper[4775]: I1125 19:34:01.462102 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:01Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.054446 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17"} Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.060429 4775 generic.go:334] "Generic (PLEG): 
container finished" podID="bc4e8832-7db1-4026-aff5-c6d34b2b8f99" containerID="06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979" exitCode=0 Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.060528 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" event={"ID":"bc4e8832-7db1-4026-aff5-c6d34b2b8f99","Type":"ContainerDied","Data":"06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979"} Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.072431 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19
888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.072902 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505"} Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.072946 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465"} Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.072961 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" 
event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b"} Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.072974 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536"} Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.091233 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.112493 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.132724 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.164926 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.183453 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.205520 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.226915 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.242766 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.257196 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.273919 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.289081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.305033 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.330401 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.343733 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e0
0070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19
:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.360433 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.376205 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.392193 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.405347 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.420829 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.435909 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.448214 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.463981 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.477233 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:02Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.478546 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:34:02 crc 
kubenswrapper[4775]: I1125 19:34:02.478672 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.478703 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.478744 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:34:06.478708668 +0000 UTC m=+28.395071034 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.478792 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.478804 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.478840 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:06.478826502 +0000 UTC m=+28.395188868 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.478881 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.478883 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.478911 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.478928 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.478968 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:06.478955906 +0000 UTC m=+28.395318492 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.479042 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.479051 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.479069 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.479075 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:06.479064558 +0000 UTC m=+28.395426924 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.479082 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.479121 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:06.47911367 +0000 UTC m=+28.395476036 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.623017 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.624179 4775 scope.go:117] "RemoveContainer" containerID="bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89" Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.624604 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.846443 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.846489 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:02 crc kubenswrapper[4775]: I1125 19:34:02.846553 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.846726 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.846883 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:02 crc kubenswrapper[4775]: E1125 19:34:02.846993 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.085906 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1"} Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.085984 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e"} Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.089193 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc4e8832-7db1-4026-aff5-c6d34b2b8f99" containerID="a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8" exitCode=0 Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.089326 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" event={"ID":"bc4e8832-7db1-4026-aff5-c6d34b2b8f99","Type":"ContainerDied","Data":"a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8"} Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.106303 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.131939 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.150753 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.167490 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.195239 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.213128 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d60
8d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.229699 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.248910 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.267120 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.282497 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.297417 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:03 crc kubenswrapper[4775]: I1125 19:34:03.313871 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:03Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.096610 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc4e8832-7db1-4026-aff5-c6d34b2b8f99" containerID="86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9" exitCode=0 Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.096734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" event={"ID":"bc4e8832-7db1-4026-aff5-c6d34b2b8f99","Type":"ContainerDied","Data":"86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9"} Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.117204 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.137813 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.159272 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.185690 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.220981 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.241598 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.264028 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 
19:34:04.284023 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.303157 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.321013 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.322494 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.328790 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.334274 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.346192 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13
fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.367087 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.398121 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.418238 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.447802 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.472693 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.485106 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.487744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.487792 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.487805 4775 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.487930 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.494131 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:
34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.499473 4775 
kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.499824 4775 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.501388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.501439 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.501458 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.501482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.501499 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:04Z","lastTransitionTime":"2025-11-25T19:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.514735 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z 
is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: E1125 19:34:04.523384 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.528971 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.529018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.529031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.529054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.529068 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:04Z","lastTransitionTime":"2025-11-25T19:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.533935 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.549523 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: E1125 19:34:04.552243 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.559072 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.559132 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.559153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.559180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.559199 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:04Z","lastTransitionTime":"2025-11-25T19:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.577716 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: E1125 19:34:04.578309 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.582734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.582795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.582811 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.582833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.582851 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:04Z","lastTransitionTime":"2025-11-25T19:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.598369 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: E1125 19:34:04.599189 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.604635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.604691 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.604706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.604728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.604743 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:04Z","lastTransitionTime":"2025-11-25T19:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.614693 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: E1125 19:34:04.627268 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: E1125 19:34:04.627526 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.630512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.630563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.630583 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.630606 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.630621 4775 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:04Z","lastTransitionTime":"2025-11-25T19:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.635795 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.667214 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:04Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.733955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.734015 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.734033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.734060 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.734081 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:04Z","lastTransitionTime":"2025-11-25T19:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.837810 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.838583 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.838685 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.838728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.838749 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:04Z","lastTransitionTime":"2025-11-25T19:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.846941 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.847005 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:04 crc kubenswrapper[4775]: E1125 19:34:04.847119 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:04 crc kubenswrapper[4775]: E1125 19:34:04.847345 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.847492 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:04 crc kubenswrapper[4775]: E1125 19:34:04.847592 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.941607 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.941641 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.941681 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.941696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:04 crc kubenswrapper[4775]: I1125 19:34:04.941705 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:04Z","lastTransitionTime":"2025-11-25T19:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.045329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.045444 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.045464 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.045496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.045516 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.106329 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc4e8832-7db1-4026-aff5-c6d34b2b8f99" containerID="f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1" exitCode=0 Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.106453 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" event={"ID":"bc4e8832-7db1-4026-aff5-c6d34b2b8f99","Type":"ContainerDied","Data":"f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1"} Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.114432 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8"} Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.130379 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.149401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.149469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.149487 4775 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.149516 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.149536 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.152507 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555
180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.176982 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.204620 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.238893 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.253261 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.253329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.253348 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.253421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.253491 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.265431 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.287082 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.317960 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.345483 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.357969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.358042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.358064 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.358094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.358115 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.370992 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.388859 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.404145 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.421997 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:05Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.462004 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.462449 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.462574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.462712 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.462863 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.566729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.566795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.566815 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.566844 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.566864 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.670440 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.670495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.670540 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.670565 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.670587 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.774309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.774389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.774416 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.774449 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.774474 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.877446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.877508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.877528 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.877553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.877573 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.980399 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.980466 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.980490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.980520 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 19:34:05 crc kubenswrapper[4775]: I1125 19:34:05.980541 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:05Z","lastTransitionTime":"2025-11-25T19:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.084092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.084180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.084198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.084223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.084245 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:06Z","lastTransitionTime":"2025-11-25T19:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.122952 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc4e8832-7db1-4026-aff5-c6d34b2b8f99" containerID="c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986" exitCode=0
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.123015 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" event={"ID":"bc4e8832-7db1-4026-aff5-c6d34b2b8f99","Type":"ContainerDied","Data":"c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986"}
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.146742 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.167296 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.187023 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.187260 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.187273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.187292 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.187305 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:06Z","lastTransitionTime":"2025-11-25T19:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.194130 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.214115 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.231883 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.246428 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.275049 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.289387 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.289425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.289436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.289455 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.289468 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:06Z","lastTransitionTime":"2025-11-25T19:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.291813 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.309886 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.325396 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.340082 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.359362 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.373939 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:06Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.392070 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.392116 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.392127 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.392146 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.392158 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:06Z","lastTransitionTime":"2025-11-25T19:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.494924 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.495003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.495021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.495057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.495083 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:06Z","lastTransitionTime":"2025-11-25T19:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.519746 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.519912 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.519971 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520039 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:34:14.519995444 +0000 UTC m=+36.436357840 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.520107 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520155 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520186 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.520202 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520207 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520288 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520205 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520356 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520401 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:14.520373504 +0000 UTC m=+36.436735910 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520404 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520433 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:14.520421086 +0000 UTC m=+36.436783492 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520479 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520499 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:14.520467027 +0000 UTC m=+36.436829503 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.520532 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:14.520517808 +0000 UTC m=+36.436880204 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.605574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.605633 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.605676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.605702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:06 crc 
kubenswrapper[4775]: I1125 19:34:06.605718 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:06Z","lastTransitionTime":"2025-11-25T19:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.710865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.711443 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.711456 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.711475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.711486 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:06Z","lastTransitionTime":"2025-11-25T19:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.814417 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.814482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.814500 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.814532 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.814553 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:06Z","lastTransitionTime":"2025-11-25T19:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.846171 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.846261 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.846332 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.846642 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.846725 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:06 crc kubenswrapper[4775]: E1125 19:34:06.846964 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.917518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.917559 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.917568 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.917586 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:06 crc kubenswrapper[4775]: I1125 19:34:06.917597 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:06Z","lastTransitionTime":"2025-11-25T19:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.020865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.020933 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.020952 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.020981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.021003 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.125196 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.125278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.125305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.125338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.125360 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.134587 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc4e8832-7db1-4026-aff5-c6d34b2b8f99" containerID="cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365" exitCode=0 Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.134726 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" event={"ID":"bc4e8832-7db1-4026-aff5-c6d34b2b8f99","Type":"ContainerDied","Data":"cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.142915 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.143605 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.143678 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.162001 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.187610 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.204124 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.204359 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.210115 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.229863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.229950 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.229972 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.230029 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.230049 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.236140 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.258293 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.276200 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.296387 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.314492 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.333121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.333370 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.333401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.333434 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.333454 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.335550 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.354235 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.372482 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.394696 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.409002 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.428581 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.436018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.436070 4775 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.436083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.436106 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.436122 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.451390 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.478473 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.503151 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.523786 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.539722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.539785 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.539802 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.539826 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.539845 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.544196 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.560339 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.576407 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.599209 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.619150 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.640554 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.642690 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.642749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.642771 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.642798 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.642818 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.664879 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.698155 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.746262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.746331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.746346 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.746772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.746819 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.850610 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.851182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.851199 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.851226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.851245 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.955168 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.955237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.955250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.955270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:07 crc kubenswrapper[4775]: I1125 19:34:07.955283 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:07Z","lastTransitionTime":"2025-11-25T19:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.058076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.058116 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.058127 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.058146 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.058161 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.150872 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" event={"ID":"bc4e8832-7db1-4026-aff5-c6d34b2b8f99","Type":"ContainerStarted","Data":"faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.150935 4775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.160921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.160959 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.160972 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.160992 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.161007 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.172387 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.189012 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.212353 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.246205 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.263803 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.263852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.263867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.263889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.263906 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.284216 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.307700 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.322606 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.337342 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.356017 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.366703 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.366738 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.366747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.366763 
4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.366774 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.371153 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.384029 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.399535 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.416899 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.470724 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.470774 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.470786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.470805 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.470821 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.499270 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.574044 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.574097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.574109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.574131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.574145 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.677722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.677766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.677777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.677791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.677801 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.781400 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.781520 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.781536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.781558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.781571 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.846195 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.846217 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.846345 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:08 crc kubenswrapper[4775]: E1125 19:34:08.846549 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:08 crc kubenswrapper[4775]: E1125 19:34:08.846699 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:08 crc kubenswrapper[4775]: E1125 19:34:08.846943 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.884635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.884695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.884706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.884722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.884733 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.907359 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.930052 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.946728 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.963050 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.987020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.987124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.987141 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.987163 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.987179 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:08Z","lastTransitionTime":"2025-11-25T19:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:08 crc kubenswrapper[4775]: I1125 19:34:08.991967 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.011126 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.035756 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.052321 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.066952 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.083934 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.092051 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.092101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.092113 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.092133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.092145 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:09Z","lastTransitionTime":"2025-11-25T19:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.102720 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.118753 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.138228 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.195449 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.195524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.195544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.195570 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.195589 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:09Z","lastTransitionTime":"2025-11-25T19:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.298932 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.299358 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.299521 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.299658 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.299815 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:09Z","lastTransitionTime":"2025-11-25T19:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.402621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.403000 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.403124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.403295 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.403321 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:09Z","lastTransitionTime":"2025-11-25T19:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.506348 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.506394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.506404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.506421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.506433 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:09Z","lastTransitionTime":"2025-11-25T19:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.609807 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.610242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.610386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.610573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.610889 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:09Z","lastTransitionTime":"2025-11-25T19:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.714773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.714855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.714879 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.714912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.714935 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:09Z","lastTransitionTime":"2025-11-25T19:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.816956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.817000 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.817013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.817030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.817048 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:09Z","lastTransitionTime":"2025-11-25T19:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.919885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.919931 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.919944 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.919962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:09 crc kubenswrapper[4775]: I1125 19:34:09.919976 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:09Z","lastTransitionTime":"2025-11-25T19:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.023547 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.023602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.023614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.023637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.023688 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.126681 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.126753 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.126772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.126802 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.126821 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.160384 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/0.log" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.164237 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680" exitCode=1 Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.164299 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.164859 4775 scope.go:117] "RemoveContainer" containerID="f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.186742 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.206641 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.225046 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.229292 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.229372 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.229397 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.229428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.229458 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.245953 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.265218 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.279496 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.297004 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.319525 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:10Z\\\",\\\"message\\\":\\\"lient-go/informers/factory.go:160\\\\nI1125 19:34:09.967089 6028 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 19:34:09.967718 6028 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 19:34:09.967763 6028 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 19:34:09.967782 6028 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 19:34:09.967787 6028 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 19:34:09.967800 6028 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 19:34:09.967805 6028 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 19:34:09.967818 6028 factory.go:656] Stopping watch factory\\\\nI1125 19:34:09.967836 6028 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 19:34:09.967887 6028 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 19:34:09.967900 6028 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:09.967908 6028 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 19:34:09.967916 6028 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 19:34:09.967924 6028 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.333615 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.333691 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.333721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.333749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.333767 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.338448 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.365945 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.387077 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.399461 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.416046 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:10Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.435955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.435996 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.436007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.436022 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.436035 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.538868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.538913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.538925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.538944 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.538958 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.646678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.646726 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.646740 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.646762 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.646805 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.750565 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.750641 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.750695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.750726 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.750750 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.847145 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.847159 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:10 crc kubenswrapper[4775]: E1125 19:34:10.847386 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.847302 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:10 crc kubenswrapper[4775]: E1125 19:34:10.847504 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:10 crc kubenswrapper[4775]: E1125 19:34:10.847669 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.852881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.852918 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.852928 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.852945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.852958 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.955448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.955545 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.955574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.955609 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:10 crc kubenswrapper[4775]: I1125 19:34:10.955634 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:10Z","lastTransitionTime":"2025-11-25T19:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.059208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.059280 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.059301 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.059331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.059353 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.162880 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.162928 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.162941 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.162959 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.162972 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.170656 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/0.log" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.174798 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.175246 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.196908 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.219421 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.236872 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.255388 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.266422 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.266488 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.266510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.266537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.266556 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.284216 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:10Z\\\",\\\"message\\\":\\\"lient-go/informers/factory.go:160\\\\nI1125 19:34:09.967089 6028 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 19:34:09.967718 6028 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 
19:34:09.967763 6028 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 19:34:09.967782 6028 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 19:34:09.967787 6028 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 19:34:09.967800 6028 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 19:34:09.967805 6028 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 19:34:09.967818 6028 factory.go:656] Stopping watch factory\\\\nI1125 19:34:09.967836 6028 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 19:34:09.967887 6028 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 19:34:09.967900 6028 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:09.967908 6028 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 19:34:09.967916 6028 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 19:34:09.967924 6028 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.305775 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.331898 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.355971 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.369517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.369585 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.369605 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.369633 4775 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.369717 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.376121 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.400191 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.420847 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.437108 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.454789 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:11Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.472361 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.472431 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.472443 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.472471 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.472488 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.575958 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.576028 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.576045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.576073 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.576096 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.679846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.679911 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.679929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.679956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.679975 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.790699 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.790759 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.790777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.790802 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.790822 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.893445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.893487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.893504 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.893524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.893540 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.997489 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.997557 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.997576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.997605 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:11 crc kubenswrapper[4775]: I1125 19:34:11.997625 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:11Z","lastTransitionTime":"2025-11-25T19:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.100363 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.100424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.100435 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.100460 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.100475 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:12Z","lastTransitionTime":"2025-11-25T19:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.180674 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/1.log" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.181370 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/0.log" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.184636 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c" exitCode=1 Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.184715 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.184818 4775 scope.go:117] "RemoveContainer" containerID="f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.185823 4775 scope.go:117] "RemoveContainer" containerID="d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c" Nov 25 19:34:12 crc kubenswrapper[4775]: E1125 19:34:12.186224 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.204119 4775 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.204183 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.204202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.204228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.204247 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:12Z","lastTransitionTime":"2025-11-25T19:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.210600 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03
d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.232903 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.255784 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.271633 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.296284 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.307621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.307748 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.307774 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.307809 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.307836 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:12Z","lastTransitionTime":"2025-11-25T19:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.315084 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.333813 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.355161 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.374469 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.391175 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.409435 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.410929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.410961 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.410971 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.410988 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.411001 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:12Z","lastTransitionTime":"2025-11-25T19:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.432376 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.464309 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f835e48050eea8a4a1e19c6002ff3922acd81c5bb1231e87d3ff4c44b0566680\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:10Z\\\",\\\"message\\\":\\\"lient-go/informers/factory.go:160\\\\nI1125 19:34:09.967089 6028 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 19:34:09.967718 6028 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 
19:34:09.967763 6028 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 19:34:09.967782 6028 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 19:34:09.967787 6028 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 19:34:09.967800 6028 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 19:34:09.967805 6028 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 19:34:09.967818 6028 factory.go:656] Stopping watch factory\\\\nI1125 19:34:09.967836 6028 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 19:34:09.967887 6028 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 19:34:09.967900 6028 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:09.967908 6028 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 19:34:09.967916 6028 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 19:34:09.967924 6028 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin
-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"n
ame\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:12Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:12 crc 
kubenswrapper[4775]: I1125 19:34:12.514959 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.515023 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.515041 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.515065 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.515086 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:12Z","lastTransitionTime":"2025-11-25T19:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.619195 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.619804 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.619829 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.619859 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.619885 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:12Z","lastTransitionTime":"2025-11-25T19:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.723191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.723291 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.723309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.723338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.723360 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:12Z","lastTransitionTime":"2025-11-25T19:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.827306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.827389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.827820 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.827862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.827887 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:12Z","lastTransitionTime":"2025-11-25T19:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.846494 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.846577 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:12 crc kubenswrapper[4775]: E1125 19:34:12.846763 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.846922 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:12 crc kubenswrapper[4775]: E1125 19:34:12.847136 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:12 crc kubenswrapper[4775]: E1125 19:34:12.847296 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.931346 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.931425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.931449 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.931483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:12 crc kubenswrapper[4775]: I1125 19:34:12.931507 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:12Z","lastTransitionTime":"2025-11-25T19:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.034436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.034494 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.034512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.034539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.034558 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.137321 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.137365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.137383 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.137405 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.137424 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.191603 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/1.log" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.196576 4775 scope.go:117] "RemoveContainer" containerID="d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c" Nov 25 19:34:13 crc kubenswrapper[4775]: E1125 19:34:13.196787 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.206385 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4"] Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.207323 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.210294 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.211259 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.223621 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.239914 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.239955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.239967 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.239983 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.239995 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.241232 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.264861 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.287692 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.311804 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.330355 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.342905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.343012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.343031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.343055 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.343074 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.354624 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.387322 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.394993 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4349a7c-699e-446c-ac37-7fbf6310803d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.395069 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7gm\" (UniqueName: \"kubernetes.io/projected/b4349a7c-699e-446c-ac37-7fbf6310803d-kube-api-access-6w7gm\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: 
I1125 19:34:13.395140 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4349a7c-699e-446c-ac37-7fbf6310803d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.395341 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4349a7c-699e-446c-ac37-7fbf6310803d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.407601 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.429700 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.445749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.445801 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.445813 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.445834 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.445852 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.451508 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.469411 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.492709 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.495877 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w7gm\" (UniqueName: \"kubernetes.io/projected/b4349a7c-699e-446c-ac37-7fbf6310803d-kube-api-access-6w7gm\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.495933 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4349a7c-699e-446c-ac37-7fbf6310803d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.495986 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4349a7c-699e-446c-ac37-7fbf6310803d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.496025 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4349a7c-699e-446c-ac37-7fbf6310803d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.496874 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4349a7c-699e-446c-ac37-7fbf6310803d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.497047 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4349a7c-699e-446c-ac37-7fbf6310803d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.506602 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4349a7c-699e-446c-ac37-7fbf6310803d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.513824 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.516497 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w7gm\" (UniqueName: \"kubernetes.io/projected/b4349a7c-699e-446c-ac37-7fbf6310803d-kube-api-access-6w7gm\") pod \"ovnkube-control-plane-749d76644c-w98l4\" (UID: \"b4349a7c-699e-446c-ac37-7fbf6310803d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.530414 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.534609 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.548916 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.548976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.548996 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.549023 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.549043 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: W1125 19:34:13.550301 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4349a7c_699e_446c_ac37_7fbf6310803d.slice/crio-68338da2706eede4239638a66dadae4497abfcd860642d0f803dba4f2f781cc7 WatchSource:0}: Error finding container 68338da2706eede4239638a66dadae4497abfcd860642d0f803dba4f2f781cc7: Status 404 returned error can't find the container with id 68338da2706eede4239638a66dadae4497abfcd860642d0f803dba4f2f781cc7 Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.556476 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.571265 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.595542 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.616341 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.641476 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.652133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.652178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.652192 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.652214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.652229 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.658102 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z 
is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.675156 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.693169 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.709264 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.726215 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.750580 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.755626 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.755812 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.755927 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.756059 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.756180 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.763398 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:13Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.847878 4775 scope.go:117] "RemoveContainer" containerID="bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.859038 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.859078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.859089 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.859108 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.859122 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.962182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.962241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.962255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.962278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:13 crc kubenswrapper[4775]: I1125 19:34:13.962297 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:13Z","lastTransitionTime":"2025-11-25T19:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.065829 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.065885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.065898 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.065918 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.065932 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.168351 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.168406 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.168417 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.168442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.168456 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.201771 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.204031 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.204416 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.206208 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" event={"ID":"b4349a7c-699e-446c-ac37-7fbf6310803d","Type":"ContainerStarted","Data":"56825bd016b0957af499784a8d64c7d7eadc5d107c96c776a6a2b2b3c362b453"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.206243 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" event={"ID":"b4349a7c-699e-446c-ac37-7fbf6310803d","Type":"ContainerStarted","Data":"050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.206256 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" event={"ID":"b4349a7c-699e-446c-ac37-7fbf6310803d","Type":"ContainerStarted","Data":"68338da2706eede4239638a66dadae4497abfcd860642d0f803dba4f2f781cc7"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.220001 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.240619 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.256288 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.270631 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.271314 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.271388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.271413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.271472 4775 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.271494 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.285226 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.300085 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.312760 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.333436 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.347073 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.363746 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-94nmx"] Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.364307 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-69dvc"] Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.364568 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.365316 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.365711 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.365828 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.369878 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.370298 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.370719 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.370805 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.374309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 
19:34:14.374354 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.374374 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.374402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.374423 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.391315 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.406256 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.417977 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.428682 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.446172 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.477678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.477726 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.477746 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.477771 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.477789 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.491583 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.506477 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ba22b2a3-bdc5-4523-9574-9111a506778a-host\") pod \"node-ca-94nmx\" (UID: \"ba22b2a3-bdc5-4523-9574-9111a506778a\") " pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.506562 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7z8l\" (UniqueName: \"kubernetes.io/projected/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-kube-api-access-j7z8l\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.506592 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/ba22b2a3-bdc5-4523-9574-9111a506778a-serviceca\") pod \"node-ca-94nmx\" (UID: \"ba22b2a3-bdc5-4523-9574-9111a506778a\") " pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.506610 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrv5\" (UniqueName: \"kubernetes.io/projected/ba22b2a3-bdc5-4523-9574-9111a506778a-kube-api-access-ztrv5\") pod \"node-ca-94nmx\" (UID: \"ba22b2a3-bdc5-4523-9574-9111a506778a\") " pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.506626 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.518878 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.531206 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.543562 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.556234 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.566396 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.580413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.580437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.580447 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc 
kubenswrapper[4775]: I1125 19:34:14.580462 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.580473 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.581766 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc
3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.592758 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.605700 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607004 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607086 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607118 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607144 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7z8l\" (UniqueName: \"kubernetes.io/projected/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-kube-api-access-j7z8l\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607164 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607182 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ba22b2a3-bdc5-4523-9574-9111a506778a-serviceca\") pod \"node-ca-94nmx\" (UID: \"ba22b2a3-bdc5-4523-9574-9111a506778a\") " pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607200 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrv5\" (UniqueName: \"kubernetes.io/projected/ba22b2a3-bdc5-4523-9574-9111a506778a-kube-api-access-ztrv5\") pod \"node-ca-94nmx\" (UID: \"ba22b2a3-bdc5-4523-9574-9111a506778a\") " pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607218 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607236 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ba22b2a3-bdc5-4523-9574-9111a506778a-host\") pod \"node-ca-94nmx\" (UID: \"ba22b2a3-bdc5-4523-9574-9111a506778a\") " pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.607256 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607386 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607415 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607428 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607471 4775 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:30.607455194 +0000 UTC m=+52.523817560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607707 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:34:30.60769952 +0000 UTC m=+52.524061886 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607759 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607771 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607779 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607803 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:30.607795303 +0000 UTC m=+52.524157669 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607842 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.607870 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:30.607862185 +0000 UTC m=+52.524224551 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.608039 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.608069 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:34:30.60806296 +0000 UTC m=+52.524425326 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.608453 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.608508 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs podName:f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f nodeName:}" failed. No retries permitted until 2025-11-25 19:34:15.108500352 +0000 UTC m=+37.024862718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs") pod "network-metrics-daemon-69dvc" (UID: "f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.608536 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ba22b2a3-bdc5-4523-9574-9111a506778a-host\") pod \"node-ca-94nmx\" (UID: \"ba22b2a3-bdc5-4523-9574-9111a506778a\") " pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.609586 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ba22b2a3-bdc5-4523-9574-9111a506778a-serviceca\") pod \"node-ca-94nmx\" (UID: \"ba22b2a3-bdc5-4523-9574-9111a506778a\") " pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 
19:34:14.618654 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.628615 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7z8l\" (UniqueName: \"kubernetes.io/projected/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-kube-api-access-j7z8l\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.628919 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrv5\" (UniqueName: \"kubernetes.io/projected/ba22b2a3-bdc5-4523-9574-9111a506778a-kube-api-access-ztrv5\") pod \"node-ca-94nmx\" (UID: \"ba22b2a3-bdc5-4523-9574-9111a506778a\") " pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.634866 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.646767 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc 
kubenswrapper[4775]: I1125 19:34:14.660603 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.680164 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.682495 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-94nmx" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.683089 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.683114 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.683127 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.683145 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.683159 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.696786 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:14Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.786047 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.786111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.786125 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.786149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.786162 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.846358 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.846432 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.846527 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.846609 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.846817 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:14 crc kubenswrapper[4775]: E1125 19:34:14.846936 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.889027 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.889107 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.889160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.889194 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.889213 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.992867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.992922 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.992945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.992976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:14 crc kubenswrapper[4775]: I1125 19:34:14.992999 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:14Z","lastTransitionTime":"2025-11-25T19:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.000698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.000770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.000780 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.000796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.000807 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: E1125 19:34:15.023074 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.029218 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.029274 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.029289 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.029311 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.029326 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: E1125 19:34:15.049150 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.054488 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.054564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.054583 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.054641 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.054706 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: E1125 19:34:15.072982 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.077530 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.077588 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.077600 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.077618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.077630 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: E1125 19:34:15.093876 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.099232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.099283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.099317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.099344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.099359 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.113320 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:15 crc kubenswrapper[4775]: E1125 19:34:15.113501 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:15 crc kubenswrapper[4775]: E1125 19:34:15.113581 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs podName:f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f nodeName:}" failed. No retries permitted until 2025-11-25 19:34:16.113558942 +0000 UTC m=+38.029921308 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs") pod "network-metrics-daemon-69dvc" (UID: "f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:15 crc kubenswrapper[4775]: E1125 19:34:15.115170 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5
609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: E1125 19:34:15.115341 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.117511 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.117559 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.117579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.117604 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.117623 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.212497 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-94nmx" event={"ID":"ba22b2a3-bdc5-4523-9574-9111a506778a","Type":"ContainerStarted","Data":"efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.212599 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-94nmx" event={"ID":"ba22b2a3-bdc5-4523-9574-9111a506778a","Type":"ContainerStarted","Data":"9a3cd1e4c9ca61c1aad7055d3a905da28485c3310c03717425acc5b491378b3a"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.220353 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.220405 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.220421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.220442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.220457 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.233457 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.257408 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.274607 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.302018 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.323697 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.323774 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.323796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.323825 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.323847 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.339864 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.357644 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.376063 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.395336 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.413018 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.427225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.427299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.427314 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.427336 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.427349 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.431791 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.452156 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.477898 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.496443 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc 
kubenswrapper[4775]: I1125 19:34:15.516117 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.530548 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.530599 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.530615 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.530635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.530670 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.539755 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.564317 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:15Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.634744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.634811 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.634829 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.634857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.634882 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.739029 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.739086 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.739103 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.739129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.739148 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.842976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.843061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.843076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.843101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.843116 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.846625 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:15 crc kubenswrapper[4775]: E1125 19:34:15.846900 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.947308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.947365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.947374 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.947394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:15 crc kubenswrapper[4775]: I1125 19:34:15.947405 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:15Z","lastTransitionTime":"2025-11-25T19:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.051502 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.051545 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.051554 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.051572 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.051584 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.125013 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:16 crc kubenswrapper[4775]: E1125 19:34:16.125275 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:16 crc kubenswrapper[4775]: E1125 19:34:16.125390 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs podName:f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f nodeName:}" failed. No retries permitted until 2025-11-25 19:34:18.125362148 +0000 UTC m=+40.041724524 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs") pod "network-metrics-daemon-69dvc" (UID: "f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.154299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.154346 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.154362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.154384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.154402 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.256804 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.256976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.256995 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.257020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.257039 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.360475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.360529 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.360544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.360563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.360574 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.463976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.464073 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.464092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.464120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.464138 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.567468 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.567532 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.567553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.567579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.567597 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.670464 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.670528 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.670548 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.670575 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.670598 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.774375 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.774456 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.774476 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.774506 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.774531 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.846331 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.846418 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.846622 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:16 crc kubenswrapper[4775]: E1125 19:34:16.846608 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:16 crc kubenswrapper[4775]: E1125 19:34:16.846853 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:16 crc kubenswrapper[4775]: E1125 19:34:16.847087 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.877397 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.877469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.877501 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.877536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.877561 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.980907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.980978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.981008 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.981043 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:16 crc kubenswrapper[4775]: I1125 19:34:16.981074 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:16Z","lastTransitionTime":"2025-11-25T19:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.084311 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.084563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.084581 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.084608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.084628 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:17Z","lastTransitionTime":"2025-11-25T19:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.188379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.188444 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.188462 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.188486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.188504 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:17Z","lastTransitionTime":"2025-11-25T19:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.292123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.292199 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.292221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.292252 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.292271 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:17Z","lastTransitionTime":"2025-11-25T19:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.395919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.395976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.395994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.396019 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.396038 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:17Z","lastTransitionTime":"2025-11-25T19:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.499231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.499279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.499298 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.499324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.499343 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:17Z","lastTransitionTime":"2025-11-25T19:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.602205 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.602480 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.602575 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.602680 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.602774 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:17Z","lastTransitionTime":"2025-11-25T19:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.706502 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.706589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.706619 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.706695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.706723 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:17Z","lastTransitionTime":"2025-11-25T19:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.810189 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.810497 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.810518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.810547 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.810566 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:17Z","lastTransitionTime":"2025-11-25T19:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.846500 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:17 crc kubenswrapper[4775]: E1125 19:34:17.846799 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.919925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.919986 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.920006 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.920034 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:17 crc kubenswrapper[4775]: I1125 19:34:17.920055 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:17Z","lastTransitionTime":"2025-11-25T19:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.023039 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.023125 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.023152 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.023196 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.023222 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.127969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.128025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.128037 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.128058 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.128071 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.150472 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:18 crc kubenswrapper[4775]: E1125 19:34:18.150741 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:18 crc kubenswrapper[4775]: E1125 19:34:18.150836 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs podName:f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f nodeName:}" failed. No retries permitted until 2025-11-25 19:34:22.150813929 +0000 UTC m=+44.067176305 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs") pod "network-metrics-daemon-69dvc" (UID: "f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.230985 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.231065 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.231085 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.231112 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.231133 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.334690 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.334773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.334799 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.334830 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.334853 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.438508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.438586 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.438604 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.438635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.438702 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.542801 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.542895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.542914 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.542942 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.543003 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.647315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.647398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.647417 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.647451 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.647471 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.750908 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.750978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.751002 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.751037 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.751061 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.846207 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.846299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.846345 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:18 crc kubenswrapper[4775]: E1125 19:34:18.846437 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:18 crc kubenswrapper[4775]: E1125 19:34:18.846604 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:18 crc kubenswrapper[4775]: E1125 19:34:18.846929 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.853866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.853919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.853937 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.853966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.853986 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.871426 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:18Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.891422 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:18Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.908876 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:18Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.930142 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:18Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.957220 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.957299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.957319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.957349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.957375 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:18Z","lastTransitionTime":"2025-11-25T19:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.967115 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:18Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:18 crc kubenswrapper[4775]: I1125 19:34:18.984342 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:18Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.006251 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.021330 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.041484 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.060116 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.060218 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.060240 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.060283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.060305 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.071386 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.092368 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.111388 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.127433 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.144384 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc 
kubenswrapper[4775]: I1125 19:34:19.160374 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.163934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.164014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.164034 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.164061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.164080 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.180851 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:19Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.267102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.267181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.267210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.267251 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.267281 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.371368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.371420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.371429 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.371450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.371461 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.474669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.474717 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.474728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.474749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.474762 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.577811 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.577882 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.577900 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.577925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.577944 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.681613 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.681723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.681749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.681784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.681813 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.785555 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.785680 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.785734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.785774 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.785801 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.846448 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:19 crc kubenswrapper[4775]: E1125 19:34:19.846688 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.889071 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.889147 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.889174 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.889208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.889233 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.992769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.992825 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.992839 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.992860 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:19 crc kubenswrapper[4775]: I1125 19:34:19.992872 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:19Z","lastTransitionTime":"2025-11-25T19:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.097118 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.097235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.097255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.097285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.097306 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:20Z","lastTransitionTime":"2025-11-25T19:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.201016 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.201094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.201120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.201159 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.201181 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:20Z","lastTransitionTime":"2025-11-25T19:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.304204 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.304285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.304297 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.304322 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.304335 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:20Z","lastTransitionTime":"2025-11-25T19:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.407469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.407532 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.407551 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.407578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.407595 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:20Z","lastTransitionTime":"2025-11-25T19:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.511789 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.511858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.511875 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.511905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.511924 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:20Z","lastTransitionTime":"2025-11-25T19:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.614735 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.614813 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.614832 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.614860 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.614885 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:20Z","lastTransitionTime":"2025-11-25T19:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.718168 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.718207 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.718216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.718232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.718243 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:20Z","lastTransitionTime":"2025-11-25T19:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.821970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.822034 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.822053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.822078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.822096 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:20Z","lastTransitionTime":"2025-11-25T19:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.846833 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.846981 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.846884 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:20 crc kubenswrapper[4775]: E1125 19:34:20.847146 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:20 crc kubenswrapper[4775]: E1125 19:34:20.847284 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:20 crc kubenswrapper[4775]: E1125 19:34:20.847422 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.925279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.925733 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.925921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.926097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:20 crc kubenswrapper[4775]: I1125 19:34:20.926256 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:20Z","lastTransitionTime":"2025-11-25T19:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.029795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.030304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.030448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.030593 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.030771 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.134785 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.134866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.134887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.134917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.134937 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.238521 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.238595 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.238613 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.238641 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.238694 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.341821 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.341902 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.341920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.341948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.341967 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.445097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.445176 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.445194 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.445221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.445238 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.548161 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.548237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.548254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.548283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.548302 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.651612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.651679 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.651689 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.651705 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.651718 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.754503 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.754586 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.754605 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.754635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.754691 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.846624 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:21 crc kubenswrapper[4775]: E1125 19:34:21.846956 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.857923 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.857982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.857999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.858021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.858039 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.962156 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.962225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.962242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.962268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:21 crc kubenswrapper[4775]: I1125 19:34:21.962289 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:21Z","lastTransitionTime":"2025-11-25T19:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.066007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.066075 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.066093 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.066120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.066139 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.169177 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.169242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.169261 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.169284 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.169304 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.192305 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:22 crc kubenswrapper[4775]: E1125 19:34:22.192580 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:22 crc kubenswrapper[4775]: E1125 19:34:22.192761 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs podName:f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f nodeName:}" failed. No retries permitted until 2025-11-25 19:34:30.192724659 +0000 UTC m=+52.109087055 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs") pod "network-metrics-daemon-69dvc" (UID: "f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.272165 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.272227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.272247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.272272 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.272287 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.376017 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.376085 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.376100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.376128 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.376147 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.479294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.479362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.479387 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.479424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.479446 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.582590 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.582667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.582708 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.582732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.582746 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.686354 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.686412 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.686423 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.686445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.686466 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.790302 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.790379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.790390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.790408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.790423 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.848384 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.848490 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:22 crc kubenswrapper[4775]: E1125 19:34:22.848628 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.848497 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:22 crc kubenswrapper[4775]: E1125 19:34:22.848850 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:22 crc kubenswrapper[4775]: E1125 19:34:22.849133 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.893639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.893771 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.893792 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.893825 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.893847 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.998161 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.998248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.998268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.998295 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:22 crc kubenswrapper[4775]: I1125 19:34:22.998313 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:22Z","lastTransitionTime":"2025-11-25T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.101882 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.101967 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.101991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.102023 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.102043 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:23Z","lastTransitionTime":"2025-11-25T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.206378 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.206487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.206514 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.206561 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.206609 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:23Z","lastTransitionTime":"2025-11-25T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.309746 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.309836 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.309861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.309895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.309921 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:23Z","lastTransitionTime":"2025-11-25T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.413060 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.413132 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.413144 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.413166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.413179 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:23Z","lastTransitionTime":"2025-11-25T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.516530 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.516594 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.516614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.516641 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.516690 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:23Z","lastTransitionTime":"2025-11-25T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.620327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.620397 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.620418 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.620451 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.620471 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:23Z","lastTransitionTime":"2025-11-25T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.724911 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.725002 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.725024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.725063 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.725087 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:23Z","lastTransitionTime":"2025-11-25T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.827734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.827809 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.827827 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.827860 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.827880 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:23Z","lastTransitionTime":"2025-11-25T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.846284 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:23 crc kubenswrapper[4775]: E1125 19:34:23.846554 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.931474 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.931554 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.931575 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.931603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:23 crc kubenswrapper[4775]: I1125 19:34:23.931624 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:23Z","lastTransitionTime":"2025-11-25T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.035246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.035331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.035352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.035384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.035405 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.139384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.139465 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.139486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.139513 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.139534 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.242586 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.242692 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.242722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.242751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.242773 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.346913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.346968 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.346981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.347013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.347030 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.450171 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.450227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.450235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.450250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.450261 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.554528 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.554634 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.554675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.554706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.554724 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.658057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.658119 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.658134 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.658158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.658172 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.760764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.760870 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.760896 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.760930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.760956 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.846749 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.846869 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.846776 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:24 crc kubenswrapper[4775]: E1125 19:34:24.847048 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:24 crc kubenswrapper[4775]: E1125 19:34:24.847273 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:24 crc kubenswrapper[4775]: E1125 19:34:24.847462 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.863713 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.863766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.863786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.863813 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.863834 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.967535 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.967614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.967639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.967702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:24 crc kubenswrapper[4775]: I1125 19:34:24.967721 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:24Z","lastTransitionTime":"2025-11-25T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.070243 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.070307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.070331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.070359 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.070376 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.173155 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.173265 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.173290 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.173328 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.173348 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.276918 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.277101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.277135 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.277172 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.277193 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.381128 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.381224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.381246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.381272 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.381293 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.488206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.488248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.488259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.488276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.488286 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.489267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.489324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.489346 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.489367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.489384 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: E1125 19:34:25.509510 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:25Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.515790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.515855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.515880 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.515914 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.515941 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: E1125 19:34:25.537952 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:25Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.543675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.543890 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.544062 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.544230 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.544390 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: E1125 19:34:25.567123 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:25Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.574190 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.574270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.574292 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.574320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.574341 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: E1125 19:34:25.597066 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:25Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.604157 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.604422 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.604579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.604796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.604960 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: E1125 19:34:25.626402 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:25Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:25 crc kubenswrapper[4775]: E1125 19:34:25.626715 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.629228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.629290 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.629308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.629336 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.629357 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.732595 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.732633 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.732671 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.732686 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.732695 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.836390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.836466 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.836486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.836519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.836541 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.846147 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:25 crc kubenswrapper[4775]: E1125 19:34:25.846304 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.939342 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.939400 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.939412 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.939433 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:25 crc kubenswrapper[4775]: I1125 19:34:25.939449 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:25Z","lastTransitionTime":"2025-11-25T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.043176 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.043236 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.043253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.043278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.043296 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.145775 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.145830 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.145846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.145872 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.145889 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.225097 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.235626 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.249252 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.249343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.249367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.249403 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.249426 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.250768 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.268460 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.312137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.341100 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc 
kubenswrapper[4775]: I1125 19:34:26.352083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.352126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.352136 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.352151 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.352161 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.356132 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.367422 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.380339 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.391273 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.410933 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.432090 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.447307 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.455286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.455317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.455329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.455347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.455362 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.463011 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.480516 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.493352 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.510584 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.514071 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.526140 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.539394 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.554176 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.557964 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc 
kubenswrapper[4775]: I1125 19:34:26.557991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.558002 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.558019 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.558035 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.571293 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.583142 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.596615 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.618722 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.633730 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.649280 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.660754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.660814 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.660831 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc 
kubenswrapper[4775]: I1125 19:34:26.660858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.660874 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.666439 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc
3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.677559 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.694773 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.715541 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.740324 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.759802 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.764172 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.764240 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.764254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.764274 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.764287 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.777553 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z 
is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.790609 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc 
kubenswrapper[4775]: I1125 19:34:26.809526 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:26Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.846418 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.846428 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:26 crc kubenswrapper[4775]: E1125 19:34:26.846564 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.846820 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:26 crc kubenswrapper[4775]: E1125 19:34:26.846906 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:26 crc kubenswrapper[4775]: E1125 19:34:26.847111 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.848258 4775 scope.go:117] "RemoveContainer" containerID="d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.867138 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.867208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.867226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.867254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.867275 4775 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.976256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.976347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.976362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.976385 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:26 crc kubenswrapper[4775]: I1125 19:34:26.976402 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:26Z","lastTransitionTime":"2025-11-25T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.079419 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.079457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.079467 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.079483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.079493 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:27Z","lastTransitionTime":"2025-11-25T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.181676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.181714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.181728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.181746 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.181757 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:27Z","lastTransitionTime":"2025-11-25T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.259963 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/1.log" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.263948 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.264555 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.285440 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.285482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.285492 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.285510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.285522 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:27Z","lastTransitionTime":"2025-11-25T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.290837 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.309479 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.329973 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.358417 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.376031 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.388460 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.388510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.388525 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.388547 4775 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.388564 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:27Z","lastTransitionTime":"2025-11-25T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.394565 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.418451 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19
:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.435881 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc 
kubenswrapper[4775]: I1125 19:34:27.455169 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.474141 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.486369 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.490767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.490796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.490806 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.490823 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.490832 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:27Z","lastTransitionTime":"2025-11-25T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.498723 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.511478 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.522258 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.536419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.555122 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 
0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.569825 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:27Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.594044 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.594089 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.594101 4775 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.594120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.594134 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:27Z","lastTransitionTime":"2025-11-25T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.697225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.697266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.697275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.697291 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.697301 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:27Z","lastTransitionTime":"2025-11-25T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.800412 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.800465 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.800477 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.800520 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.800535 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:27Z","lastTransitionTime":"2025-11-25T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.846586 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:27 crc kubenswrapper[4775]: E1125 19:34:27.846769 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.903031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.903140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.903163 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.903193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:27 crc kubenswrapper[4775]: I1125 19:34:27.903219 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:27Z","lastTransitionTime":"2025-11-25T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.006330 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.006407 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.006425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.006452 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.006475 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.110088 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.110181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.110199 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.110233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.110256 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.214286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.214366 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.214386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.214418 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.214442 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.270833 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/2.log" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.272074 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/1.log" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.277264 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e" exitCode=1 Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.277323 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.277374 4775 scope.go:117] "RemoveContainer" containerID="d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.278917 4775 scope.go:117] "RemoveContainer" containerID="70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e" Nov 25 19:34:28 crc kubenswrapper[4775]: E1125 19:34:28.284450 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.308060 4775 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.317586 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.317693 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.317718 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.317751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.317774 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.331954 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.356538 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.372232 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc 
kubenswrapper[4775]: I1125 19:34:28.393761 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.411416 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.421495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.421573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.421592 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.421621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.421640 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.432807 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e
332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.449973 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.463860 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.484409 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.510301 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:27Z\\\",\\\"message\\\":\\\"r/openshift-kube-scheduler-crc in node crc\\\\nI1125 19:34:27.908090 6431 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI1125 19:34:27.908100 6431 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 19:34:27.907799 6431 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1125 19:34:27.908139 6431 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1125 19:34:27.908155 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1125 19:34:27.908163 6431 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1125 19:34:27.908140 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2
356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.523631 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.523713 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.523731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.523755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.523772 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.531958 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.550206 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.569518 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.584148 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.609740 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.626676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.626758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.626778 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.626806 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.626825 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.631218 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.730259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.730323 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.730344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.730369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.730386 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.834025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.834105 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.834124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.834151 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.834168 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.846690 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:28 crc kubenswrapper[4775]: E1125 19:34:28.846939 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.847078 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.847103 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:28 crc kubenswrapper[4775]: E1125 19:34:28.847370 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:28 crc kubenswrapper[4775]: E1125 19:34:28.847534 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.877083 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6fa4dd3a1332505f2474434ce7a33db50b0f4042602b63d6d339dce39ae3f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:11Z\\\",\\\"message\\\":\\\"4:11.167546 6157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 19:34:11.167521 6157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 19:34:11.167558 6157 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.382527ms\\\\nI1125 19:34:11.167535 6157 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 19:34:11.167595 6157 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nF1125 19:34:11.167621 6157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:27Z\\\",\\\"message\\\":\\\"r/openshift-kube-scheduler-crc in node crc\\\\nI1125 19:34:27.908090 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI1125 19:34:27.908100 6431 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 19:34:27.907799 6431 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1125 19:34:27.908139 6431 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1125 19:34:27.908155 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1125 19:34:27.908163 6431 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1125 19:34:27.908140 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-
bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.891557 4775 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.917910 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.936934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.936980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.936993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.937012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.937026 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:28Z","lastTransitionTime":"2025-11-25T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.940459 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.959571 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:28 crc kubenswrapper[4775]: I1125 19:34:28.976881 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.001555 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:28Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.019723 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.039998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.040067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.040088 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.040119 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.040141 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.041485 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\
\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.061566 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.081354 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.103544 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.124541 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc 
kubenswrapper[4775]: I1125 19:34:29.143924 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.143976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.143990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.144011 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.144049 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.153892 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.176453 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.196150 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.218729 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.247704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.247778 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.247797 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 
25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.247825 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.247844 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.284703 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/2.log" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.291517 4775 scope.go:117] "RemoveContainer" containerID="70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e" Nov 25 19:34:29 crc kubenswrapper[4775]: E1125 19:34:29.291836 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.317090 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.337708 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.352073 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.352149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.352165 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.352194 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.352216 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.364132 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z 
is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.383635 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc 
kubenswrapper[4775]: I1125 19:34:29.406282 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.429462 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.450729 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.455454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.455525 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.455545 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.455578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.455604 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.469434 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.496339 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.526448 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:27Z\\\",\\\"message\\\":\\\"r/openshift-kube-scheduler-crc in node crc\\\\nI1125 19:34:27.908090 6431 
obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI1125 19:34:27.908100 6431 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 19:34:27.907799 6431 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1125 19:34:27.908139 6431 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1125 19:34:27.908155 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1125 19:34:27.908163 6431 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1125 19:34:27.908140 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.547532 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.558767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.558834 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.558863 4775 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.558898 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.558923 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.571247 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.593254 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.609213 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.625860 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.648981 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.662518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.662751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.662815 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.662879 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.662991 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.664102 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:29Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.766519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.766567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.766578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.766598 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.766610 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.846791 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:29 crc kubenswrapper[4775]: E1125 19:34:29.847374 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.869197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.869267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.869285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.869311 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.869329 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.971823 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.971857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.971868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.971882 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:29 crc kubenswrapper[4775]: I1125 19:34:29.971893 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:29Z","lastTransitionTime":"2025-11-25T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.074794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.074866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.074892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.074923 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.074946 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:30Z","lastTransitionTime":"2025-11-25T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.177228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.177269 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.177279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.177295 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.177306 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:30Z","lastTransitionTime":"2025-11-25T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.280422 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.280527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.280552 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.280584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.280602 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:30Z","lastTransitionTime":"2025-11-25T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.289111 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.289312 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.289381 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs podName:f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f nodeName:}" failed. No retries permitted until 2025-11-25 19:34:46.289364065 +0000 UTC m=+68.205726441 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs") pod "network-metrics-daemon-69dvc" (UID: "f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.383991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.384094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.384115 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.384142 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.384162 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:30Z","lastTransitionTime":"2025-11-25T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.486890 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.486943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.486962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.486988 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.487007 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:30Z","lastTransitionTime":"2025-11-25T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.592723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.592764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.592776 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.592795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.592808 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:30Z","lastTransitionTime":"2025-11-25T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.693915 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.693980 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.694007 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.694035 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.694081 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694202 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694218 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694230 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694238 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694295 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694312 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 
19:34:30.694270 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 19:35:02.694257001 +0000 UTC m=+84.610619367 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694420 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 19:35:02.694394285 +0000 UTC m=+84.610756861 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694532 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694568 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:35:02.69455859 +0000 UTC m=+84.610920966 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694620 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694690 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 19:35:02.694675793 +0000 UTC m=+84.611038159 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.694819 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:35:02.694785866 +0000 UTC m=+84.611148232 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.695833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.695863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.695873 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.695901 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 
19:34:30.695913 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:30Z","lastTransitionTime":"2025-11-25T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.799328 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.799793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.799806 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.799831 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.799844 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:30Z","lastTransitionTime":"2025-11-25T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.846852 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.847288 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.847394 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.847375 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.847572 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:30 crc kubenswrapper[4775]: E1125 19:34:30.847783 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.903363 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.903446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.903471 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.903507 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:30 crc kubenswrapper[4775]: I1125 19:34:30.903528 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:30Z","lastTransitionTime":"2025-11-25T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.006361 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.006475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.006498 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.006528 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.006550 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.109502 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.109577 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.109598 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.109628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.109683 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.213216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.213303 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.213318 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.213342 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.213354 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.316576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.316624 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.316637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.316686 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.316707 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.420069 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.420125 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.420142 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.420165 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.420180 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.523398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.523518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.523538 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.523564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.523588 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.627412 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.627489 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.627508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.627544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.627567 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.733783 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.733887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.733920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.733958 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.734001 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.837994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.838069 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.838088 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.838119 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.838139 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.846351 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:31 crc kubenswrapper[4775]: E1125 19:34:31.846545 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.941587 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.941710 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.941733 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.941761 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:31 crc kubenswrapper[4775]: I1125 19:34:31.941780 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:31Z","lastTransitionTime":"2025-11-25T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.045787 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.045880 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.045911 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.045945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.045970 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.148966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.149368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.149553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.149751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.149970 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.253166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.253566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.253799 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.254036 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.254172 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.357994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.358083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.358109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.358147 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.358166 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.461996 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.462063 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.462080 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.462109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.462128 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.565234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.565308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.565326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.565390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.565434 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.669331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.669404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.669429 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.669463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.669488 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.773505 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.773590 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.773610 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.773644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.773711 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.846995 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.847063 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.847070 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:32 crc kubenswrapper[4775]: E1125 19:34:32.847210 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:32 crc kubenswrapper[4775]: E1125 19:34:32.847411 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:32 crc kubenswrapper[4775]: E1125 19:34:32.847633 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.876548 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.876623 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.876673 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.876704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.876726 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.980013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.980083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.980103 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.980128 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:32 crc kubenswrapper[4775]: I1125 19:34:32.980159 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:32Z","lastTransitionTime":"2025-11-25T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.083939 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.084002 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.084020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.084047 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.084066 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:33Z","lastTransitionTime":"2025-11-25T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.188475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.188552 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.188571 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.188603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.188625 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:33Z","lastTransitionTime":"2025-11-25T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.292866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.292929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.292949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.292980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.293003 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:33Z","lastTransitionTime":"2025-11-25T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.396709 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.396795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.396813 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.396843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.396866 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:33Z","lastTransitionTime":"2025-11-25T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.500217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.500294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.500317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.500344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.500364 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:33Z","lastTransitionTime":"2025-11-25T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.604182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.604256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.604273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.604303 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.604321 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:33Z","lastTransitionTime":"2025-11-25T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.706925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.707004 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.707033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.707066 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.707098 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:33Z","lastTransitionTime":"2025-11-25T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.810733 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.810797 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.810820 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.810846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.810866 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:33Z","lastTransitionTime":"2025-11-25T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.846735 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:33 crc kubenswrapper[4775]: E1125 19:34:33.847000 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.914486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.914540 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.914551 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.914570 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:33 crc kubenswrapper[4775]: I1125 19:34:33.914584 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:33Z","lastTransitionTime":"2025-11-25T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.017906 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.017986 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.018003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.018030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.018047 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.121979 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.122052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.122071 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.122096 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.122131 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.225876 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.225974 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.225994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.226020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.226037 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.329876 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.329956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.329975 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.330005 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.330023 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.433618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.433736 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.433756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.433785 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.433807 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.536855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.536934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.536957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.536994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.537021 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.640258 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.640318 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.640334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.640356 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.640378 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.744436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.744542 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.744577 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.744613 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.744637 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.846499 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.846556 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.846761 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:34 crc kubenswrapper[4775]: E1125 19:34:34.846979 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:34 crc kubenswrapper[4775]: E1125 19:34:34.847370 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:34 crc kubenswrapper[4775]: E1125 19:34:34.847219 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.848241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.848306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.848330 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.848358 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.848378 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.952131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.952197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.952216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.952238 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:34 crc kubenswrapper[4775]: I1125 19:34:34.952255 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:34Z","lastTransitionTime":"2025-11-25T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.055510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.055618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.055637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.055703 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.055725 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.158851 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.158926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.158945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.158973 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.158992 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.262348 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.262421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.262441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.262481 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.262503 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.365828 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.365899 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.365917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.365945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.365964 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.469361 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.469441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.469459 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.469488 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.469506 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.572596 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.572704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.572724 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.572751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.572783 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.660248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.660301 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.660312 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.660334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.660347 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: E1125 19:34:35.679823 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:35Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.686389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.686473 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.686496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.686526 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.686547 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: E1125 19:34:35.706902 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:35Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.712400 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.712450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.712463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.712485 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.712499 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: E1125 19:34:35.736108 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:35Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.741558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.741636 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.741685 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.741716 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.741735 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: E1125 19:34:35.759712 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:35Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.766638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.766751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.766779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.766816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.766842 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: E1125 19:34:35.785407 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:35Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:35 crc kubenswrapper[4775]: E1125 19:34:35.785641 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.788343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.788416 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.788437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.788470 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.788493 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.846707 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:35 crc kubenswrapper[4775]: E1125 19:34:35.846965 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.893369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.893433 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.893444 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.893466 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.893484 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.998188 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.998257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.998276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.998302 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:35 crc kubenswrapper[4775]: I1125 19:34:35.998321 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:35Z","lastTransitionTime":"2025-11-25T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.102215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.102286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.102307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.102337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.102359 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:36Z","lastTransitionTime":"2025-11-25T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.205926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.206215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.206240 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.206280 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.206306 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:36Z","lastTransitionTime":"2025-11-25T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.310442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.310499 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.310512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.310534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.310550 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:36Z","lastTransitionTime":"2025-11-25T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.413904 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.413952 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.413963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.413982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.413998 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:36Z","lastTransitionTime":"2025-11-25T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.517767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.517837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.517875 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.517909 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.517938 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:36Z","lastTransitionTime":"2025-11-25T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.622091 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.622160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.622181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.622215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.622234 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:36Z","lastTransitionTime":"2025-11-25T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.726210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.726282 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.726299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.726324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.726342 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:36Z","lastTransitionTime":"2025-11-25T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.830227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.830290 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.830307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.830332 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.830350 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:36Z","lastTransitionTime":"2025-11-25T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.846915 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.847118 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:36 crc kubenswrapper[4775]: E1125 19:34:36.847233 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:36 crc kubenswrapper[4775]: E1125 19:34:36.847400 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.847768 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:36 crc kubenswrapper[4775]: E1125 19:34:36.847922 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.933484 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.933539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.933556 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.933582 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:36 crc kubenswrapper[4775]: I1125 19:34:36.933605 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:36Z","lastTransitionTime":"2025-11-25T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.036727 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.036795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.036818 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.036850 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.036877 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.140376 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.140453 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.140471 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.140490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.140505 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.243862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.243912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.243925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.243946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.243959 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.347336 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.347408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.347433 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.347463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.347481 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.451021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.451090 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.451109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.451138 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.451157 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.554923 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.554994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.555007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.555025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.555039 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.658811 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.658861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.658871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.658892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.658903 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.761858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.761946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.761966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.762001 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.762028 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.847100 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:37 crc kubenswrapper[4775]: E1125 19:34:37.847444 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.864855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.865074 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.865248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.865415 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.865572 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.969439 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.969517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.969530 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.969556 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:37 crc kubenswrapper[4775]: I1125 19:34:37.969570 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:37Z","lastTransitionTime":"2025-11-25T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.073139 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.073197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.073208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.073227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.073237 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:38Z","lastTransitionTime":"2025-11-25T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.176247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.176311 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.176333 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.176361 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.176381 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:38Z","lastTransitionTime":"2025-11-25T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.278947 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.278995 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.279008 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.279029 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.279043 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:38Z","lastTransitionTime":"2025-11-25T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.382486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.382560 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.382578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.382607 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.382628 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:38Z","lastTransitionTime":"2025-11-25T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.487066 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.487162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.487182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.487216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.487243 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:38Z","lastTransitionTime":"2025-11-25T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.590810 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.590871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.590886 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.590910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.590924 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:38Z","lastTransitionTime":"2025-11-25T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.694599 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.694705 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.694729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.694759 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.694781 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:38Z","lastTransitionTime":"2025-11-25T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.797710 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.797836 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.797855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.797884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.797910 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:38Z","lastTransitionTime":"2025-11-25T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.846381 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:38 crc kubenswrapper[4775]: E1125 19:34:38.846545 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.847053 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.847053 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:38 crc kubenswrapper[4775]: E1125 19:34:38.847319 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:38 crc kubenswrapper[4775]: E1125 19:34:38.847411 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.866150 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:38Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.882260 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab
0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:38Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.901845 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.901953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.901966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.901989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.902007 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:38Z","lastTransitionTime":"2025-11-25T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.903239 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:38Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.924644 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:38Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.944674 4775 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:38Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.964397 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:38Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:38 crc kubenswrapper[4775]: I1125 19:34:38.983691 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:38Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.004286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.004364 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.004390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.004421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.004442 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.005813 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z 
is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.024632 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc 
kubenswrapper[4775]: I1125 19:34:39.040919 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.055340 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.070579 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.088440 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.106946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.107058 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.107081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.107107 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.107125 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.108572 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.132833 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.167321 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:27Z\\\",\\\"message\\\":\\\"r/openshift-kube-scheduler-crc in node crc\\\\nI1125 19:34:27.908090 6431 
obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI1125 19:34:27.908100 6431 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 19:34:27.907799 6431 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1125 19:34:27.908139 6431 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1125 19:34:27.908155 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1125 19:34:27.908163 6431 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1125 19:34:27.908140 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.187069 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:39Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.209857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.209919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.209933 4775 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.209956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.209972 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.313124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.313183 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.313207 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.313233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.313256 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.417129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.417217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.417236 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.417267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.417289 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.520352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.520436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.520465 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.520500 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.520521 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.624126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.624206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.624230 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.624262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.624285 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.727286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.727343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.727360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.727385 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.727405 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.832118 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.832200 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.832221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.832250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.832272 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.846607 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:39 crc kubenswrapper[4775]: E1125 19:34:39.847486 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.848058 4775 scope.go:117] "RemoveContainer" containerID="70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e" Nov 25 19:34:39 crc kubenswrapper[4775]: E1125 19:34:39.848474 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.936579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.936705 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.936724 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.936752 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:39 crc kubenswrapper[4775]: I1125 19:34:39.936771 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:39Z","lastTransitionTime":"2025-11-25T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.040483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.040534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.040544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.040561 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.040574 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.144547 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.145045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.145060 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.145081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.145095 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.249068 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.249148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.249168 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.249197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.249217 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.353355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.353421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.353443 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.353470 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.353489 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.457741 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.457813 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.457834 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.457859 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.457879 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.561894 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.561975 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.561996 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.562025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.562044 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.665921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.665994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.666013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.666041 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.666060 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.769336 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.769415 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.769435 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.769464 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.769483 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.846317 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.846439 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.846480 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:40 crc kubenswrapper[4775]: E1125 19:34:40.846682 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:40 crc kubenswrapper[4775]: E1125 19:34:40.846812 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:40 crc kubenswrapper[4775]: E1125 19:34:40.847120 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.872574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.872627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.872637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.872672 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.872684 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.976152 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.976217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.976232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.976256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:40 crc kubenswrapper[4775]: I1125 19:34:40.976274 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:40Z","lastTransitionTime":"2025-11-25T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.080846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.081581 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.081607 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.081635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.081677 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:41Z","lastTransitionTime":"2025-11-25T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.185247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.185318 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.185345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.185375 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.185394 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:41Z","lastTransitionTime":"2025-11-25T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.289437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.289518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.289537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.289566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.289587 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:41Z","lastTransitionTime":"2025-11-25T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.393855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.393950 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.393966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.393990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.394019 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:41Z","lastTransitionTime":"2025-11-25T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.497497 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.497585 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.497604 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.497674 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.497695 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:41Z","lastTransitionTime":"2025-11-25T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.601602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.601708 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.601732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.601768 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.601792 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:41Z","lastTransitionTime":"2025-11-25T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.705868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.705935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.705953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.705981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.706001 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:41Z","lastTransitionTime":"2025-11-25T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.810819 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.810904 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.810929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.810958 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.810978 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:41Z","lastTransitionTime":"2025-11-25T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.846318 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:41 crc kubenswrapper[4775]: E1125 19:34:41.846554 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.916465 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.916534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.916554 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.916582 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:41 crc kubenswrapper[4775]: I1125 19:34:41.916603 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:41Z","lastTransitionTime":"2025-11-25T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.020463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.020530 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.020551 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.020581 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.020603 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.123741 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.123816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.123838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.123864 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.123898 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.228088 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.228164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.228186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.228217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.228242 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.332398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.332529 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.332552 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.332582 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.332604 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.435631 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.435752 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.435781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.435813 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.435839 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.539487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.539568 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.539593 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.539627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.539693 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.643778 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.643862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.643881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.643907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.643926 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.758878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.758950 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.758970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.758999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.759018 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.846570 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.846730 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.846794 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:42 crc kubenswrapper[4775]: E1125 19:34:42.847034 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:42 crc kubenswrapper[4775]: E1125 19:34:42.847192 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:42 crc kubenswrapper[4775]: E1125 19:34:42.847375 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.862451 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.862530 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.862557 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.862593 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.862622 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.966749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.966836 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.966855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.967287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:42 crc kubenswrapper[4775]: I1125 19:34:42.967344 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:42Z","lastTransitionTime":"2025-11-25T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.070482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.070525 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.070536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.070550 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.070561 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:43Z","lastTransitionTime":"2025-11-25T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.174927 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.174971 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.174981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.174995 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.175006 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:43Z","lastTransitionTime":"2025-11-25T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.278601 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.278677 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.278687 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.278707 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.278723 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:43Z","lastTransitionTime":"2025-11-25T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.382114 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.382178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.382194 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.382221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.382238 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:43Z","lastTransitionTime":"2025-11-25T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.485081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.485130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.485140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.485162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.485174 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:43Z","lastTransitionTime":"2025-11-25T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.589118 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.589195 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.589205 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.589222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.589232 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:43Z","lastTransitionTime":"2025-11-25T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.692574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.692638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.692670 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.692692 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.692706 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:43Z","lastTransitionTime":"2025-11-25T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.795355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.795463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.795484 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.795515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.795535 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:43Z","lastTransitionTime":"2025-11-25T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.847099 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:43 crc kubenswrapper[4775]: E1125 19:34:43.847355 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.899411 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.899464 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.899476 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.899494 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:43 crc kubenswrapper[4775]: I1125 19:34:43.899508 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:43Z","lastTransitionTime":"2025-11-25T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.002571 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.002625 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.002636 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.002670 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.002680 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.105366 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.105425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.105437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.105459 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.105472 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.208584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.208893 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.209057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.209190 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.209328 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.312897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.313176 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.313240 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.313304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.313368 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.416226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.416555 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.416878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.417162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.417346 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.520967 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.521023 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.521033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.521051 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.521064 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.624078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.624361 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.624460 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.624574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.624697 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.728846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.728949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.728970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.729021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.729042 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.832539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.832626 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.832669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.832696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.832716 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.846927 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.847026 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.847120 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:44 crc kubenswrapper[4775]: E1125 19:34:44.847230 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:44 crc kubenswrapper[4775]: E1125 19:34:44.847338 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:44 crc kubenswrapper[4775]: E1125 19:34:44.847524 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.936481 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.936549 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.936575 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.936603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:44 crc kubenswrapper[4775]: I1125 19:34:44.936624 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:44Z","lastTransitionTime":"2025-11-25T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.039750 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.039820 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.039838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.039868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.039886 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.142572 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.142686 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.142714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.142744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.142765 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.247092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.247125 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.247137 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.247152 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.247163 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.349669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.349731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.349743 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.349768 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.349783 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.453627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.453959 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.454251 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.454438 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.454615 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.558857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.558930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.558949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.558977 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.558998 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.663193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.663270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.663294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.663326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.663347 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.766704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.766771 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.766788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.766816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.766836 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.847130 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:45 crc kubenswrapper[4775]: E1125 19:34:45.847374 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.862067 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.870601 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.870696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.870714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.870739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.870758 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.974295 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.974382 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.974407 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.974441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:45 crc kubenswrapper[4775]: I1125 19:34:45.974467 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:45Z","lastTransitionTime":"2025-11-25T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.078268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.079016 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.079076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.079111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.079143 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.149771 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.149819 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.149829 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.149843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.149857 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: E1125 19:34:46.161330 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:46Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.166308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.166374 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.166398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.166427 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.166451 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: E1125 19:34:46.227573 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:46Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.233357 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.233424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.233445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.233474 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.233495 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: E1125 19:34:46.250734 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:46Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:46 crc kubenswrapper[4775]: E1125 19:34:46.250967 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.253173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.253220 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.253234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.253258 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.253273 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.306235 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:46 crc kubenswrapper[4775]: E1125 19:34:46.306450 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:46 crc kubenswrapper[4775]: E1125 19:34:46.306575 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs podName:f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f nodeName:}" failed. No retries permitted until 2025-11-25 19:35:18.306550737 +0000 UTC m=+100.222913103 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs") pod "network-metrics-daemon-69dvc" (UID: "f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.355948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.356000 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.356018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.356043 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.356063 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.459136 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.459199 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.459214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.459234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.459247 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.562424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.562479 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.562493 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.562518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.562538 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.665948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.665994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.666007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.666024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.666037 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.769315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.769364 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.769375 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.769401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.769413 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.846549 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.846643 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.846644 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:46 crc kubenswrapper[4775]: E1125 19:34:46.846808 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:46 crc kubenswrapper[4775]: E1125 19:34:46.847029 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:46 crc kubenswrapper[4775]: E1125 19:34:46.847246 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.872334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.872418 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.872441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.872869 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.873097 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.976736 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.976793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.976806 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.976827 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:46 crc kubenswrapper[4775]: I1125 19:34:46.976842 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:46Z","lastTransitionTime":"2025-11-25T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.079327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.079382 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.079393 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.079407 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.079418 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:47Z","lastTransitionTime":"2025-11-25T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.182482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.182576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.182669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.182694 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.182710 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:47Z","lastTransitionTime":"2025-11-25T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.286713 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.286769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.286779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.286796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.286807 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:47Z","lastTransitionTime":"2025-11-25T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.389724 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.389779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.389796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.389816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.389830 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:47Z","lastTransitionTime":"2025-11-25T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.492856 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.492899 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.492910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.492928 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.492940 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:47Z","lastTransitionTime":"2025-11-25T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.596310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.596360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.596372 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.596389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.596403 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:47Z","lastTransitionTime":"2025-11-25T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.699760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.699847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.699867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.699901 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.699920 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:47Z","lastTransitionTime":"2025-11-25T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.802606 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.802710 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.802734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.802767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.802789 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:47Z","lastTransitionTime":"2025-11-25T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.846513 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:47 crc kubenswrapper[4775]: E1125 19:34:47.846705 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.906513 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.906597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.906609 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.906629 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:47 crc kubenswrapper[4775]: I1125 19:34:47.906642 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:47Z","lastTransitionTime":"2025-11-25T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.009986 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.010069 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.010095 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.010128 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.010154 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.113138 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.113214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.113239 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.113270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.113295 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.217246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.218017 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.218122 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.218211 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.218319 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.321414 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.321488 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.321508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.321535 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.321554 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.366191 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/0.log" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.366265 4775 generic.go:334] "Generic (PLEG): container finished" podID="850f083c-ad86-47bb-8fd1-4f2a4a9e7831" containerID="cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079" exitCode=1 Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.366312 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qf2w" event={"ID":"850f083c-ad86-47bb-8fd1-4f2a4a9e7831","Type":"ContainerDied","Data":"cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.366879 4775 scope.go:117] "RemoveContainer" containerID="cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.386764 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.410497 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:27Z\\\",\\\"message\\\":\\\"r/openshift-kube-scheduler-crc in node crc\\\\nI1125 19:34:27.908090 6431 
obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI1125 19:34:27.908100 6431 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 19:34:27.907799 6431 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1125 19:34:27.908139 6431 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1125 19:34:27.908155 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1125 19:34:27.908163 6431 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1125 19:34:27.908140 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.424624 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.424684 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.424702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.424725 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.424739 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.428947 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.446877 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.462711 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.478505 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.497936 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.514040 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.527818 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.527857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.527869 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc 
kubenswrapper[4775]: I1125 19:34:48.527887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.527901 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.533046 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc
3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.546431 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc 
kubenswrapper[4775]: I1125 19:34:48.562037 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2cea07-f9ff-405f-a2cc-3bc0b329faba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad64a22adbab6e2dfb0a2b3491957bf199625f65eb944136f9e74100ca4323a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.577187 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.592632 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.610877 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:47Z\\\",\\\"message\\\":\\\"2025-11-25T19:34:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415\\\\n2025-11-25T19:34:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415 to /host/opt/cni/bin/\\\\n2025-11-25T19:34:02Z [verbose] multus-daemon started\\\\n2025-11-25T19:34:02Z [verbose] Readiness Indicator file check\\\\n2025-11-25T19:34:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.631888 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.632192 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.632449 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.633226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.633355 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.633846 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e
332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.653340 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.668869 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.682491 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.737101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.737160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.737174 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.737196 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.737211 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.840914 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.840971 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.840987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.841012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.841030 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.846232 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:48 crc kubenswrapper[4775]: E1125 19:34:48.846380 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.846599 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:48 crc kubenswrapper[4775]: E1125 19:34:48.846725 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.847028 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:48 crc kubenswrapper[4775]: E1125 19:34:48.847146 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.866716 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.885177 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.905329 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.924377 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.943921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.944625 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.944749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:48 crc 
kubenswrapper[4775]: I1125 19:34:48.944864 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.944948 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:48Z","lastTransitionTime":"2025-11-25T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.946019 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc
3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.961699 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc kubenswrapper[4775]: I1125 19:34:48.976274 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2cea07-f9ff-405f-a2cc-3bc0b329faba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad64a22adbab6e2dfb0a2b3491957bf199625f65eb944136f9e74100ca4323a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:48 crc 
kubenswrapper[4775]: I1125 19:34:48.991045 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:48Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.005183 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.022169 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:47Z\\\",\\\"message\\\":\\\"2025-11-25T19:34:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415\\\\n2025-11-25T19:34:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415 to /host/opt/cni/bin/\\\\n2025-11-25T19:34:02Z [verbose] multus-daemon started\\\\n2025-11-25T19:34:02Z [verbose] Readiness Indicator file check\\\\n2025-11-25T19:34:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.039863 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc 
kubenswrapper[4775]: I1125 19:34:49.048152 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.048194 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.048210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.048233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.048251 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.060361 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e
332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.076523 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.093113 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.107968 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.126971 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.151208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.151267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.151286 4775 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.151312 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.151333 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.152966 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:27Z\\\",\\\"message\\\":\\\"r/openshift-kube-scheduler-crc in node crc\\\\nI1125 19:34:27.908090 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI1125 19:34:27.908100 6431 default_network_controller.go:776] Recording success event on pod 
openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 19:34:27.907799 6431 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1125 19:34:27.908139 6431 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1125 19:34:27.908155 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1125 19:34:27.908163 6431 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1125 19:34:27.908140 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.170549 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.254638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.254772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.254800 4775 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.254834 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.254859 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.359242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.359334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.359359 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.359394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.359419 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.372564 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/0.log" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.372681 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qf2w" event={"ID":"850f083c-ad86-47bb-8fd1-4f2a4a9e7831","Type":"ContainerStarted","Data":"0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.390970 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:47Z\\\",\\\"message\\\":\\\"2025-11-25T19:34:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415\\\\n2025-11-25T19:34:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415 to /host/opt/cni/bin/\\\\n2025-11-25T19:34:02Z [verbose] multus-daemon started\\\\n2025-11-25T19:34:02Z [verbose] Readiness Indicator file check\\\\n2025-11-25T19:34:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\
",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.409131 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc 
kubenswrapper[4775]: I1125 19:34:49.432018 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2cea07-f9ff-405f-a2cc-3bc0b329faba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad64a22adbab6e2dfb0a2b3491957bf199625f65eb944136f9e74100ca4323a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.462542 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.462575 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.462584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 
19:34:49.462599 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.462610 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.473204 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.503978 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.524961 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.538278 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.556589 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.565146 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.565198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.565209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.565225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.565235 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.567716 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.584010 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.605319 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:27Z\\\",\\\"message\\\":\\\"r/openshift-kube-scheduler-crc in node crc\\\\nI1125 19:34:27.908090 6431 
obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI1125 19:34:27.908100 6431 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 19:34:27.907799 6431 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1125 19:34:27.908139 6431 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1125 19:34:27.908155 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1125 19:34:27.908163 6431 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1125 19:34:27.908140 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.622512 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.648361 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.664937 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.668413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.668485 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.668505 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.668533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.668551 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.681551 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\
\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.701483 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.720378 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.740120 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:49Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.771519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.771558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.771567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc 
kubenswrapper[4775]: I1125 19:34:49.771584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.771597 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.847083 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:49 crc kubenswrapper[4775]: E1125 19:34:49.847360 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.874636 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.874712 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.874723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.874741 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.874754 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.977751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.977828 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.977846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.977873 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:49 crc kubenswrapper[4775]: I1125 19:34:49.977899 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:49Z","lastTransitionTime":"2025-11-25T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.080596 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.080718 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.080741 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.080767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.080787 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:50Z","lastTransitionTime":"2025-11-25T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.184290 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.184365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.184377 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.184397 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.184408 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:50Z","lastTransitionTime":"2025-11-25T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.287564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.287633 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.287683 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.287714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.287732 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:50Z","lastTransitionTime":"2025-11-25T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.390476 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.390565 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.390591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.390627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.390691 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:50Z","lastTransitionTime":"2025-11-25T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.493916 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.494021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.494044 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.494074 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.494095 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:50Z","lastTransitionTime":"2025-11-25T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.597527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.597577 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.597588 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.597605 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.597617 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:50Z","lastTransitionTime":"2025-11-25T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.700233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.700304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.700324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.700352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.700372 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:50Z","lastTransitionTime":"2025-11-25T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.803305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.803359 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.803372 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.803392 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.803406 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:50Z","lastTransitionTime":"2025-11-25T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.846509 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.846444 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.846748 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:50 crc kubenswrapper[4775]: E1125 19:34:50.847014 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:50 crc kubenswrapper[4775]: E1125 19:34:50.847128 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:50 crc kubenswrapper[4775]: E1125 19:34:50.847382 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.847454 4775 scope.go:117] "RemoveContainer" containerID="70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.907693 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.907744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.907794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.907815 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:50 crc kubenswrapper[4775]: I1125 19:34:50.907829 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:50Z","lastTransitionTime":"2025-11-25T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.011698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.011767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.011783 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.011806 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.011823 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.114326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.114377 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.114387 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.114403 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.114415 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.218021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.218092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.218107 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.218128 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.218145 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.320927 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.320974 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.320989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.321007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.321022 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.382987 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/2.log" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.385896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.386404 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.420074 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b539
3556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.425072 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.425129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.425142 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.425162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 
19:34:51.425175 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.436084 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63
f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.453891 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.466969 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.482534 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.496296 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.511740 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:47Z\\\",\\\"message\\\":\\\"2025-11-25T19:34:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415\\\\n2025-11-25T19:34:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415 to /host/opt/cni/bin/\\\\n2025-11-25T19:34:02Z [verbose] multus-daemon started\\\\n2025-11-25T19:34:02Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T19:34:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.528472 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.528517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.528530 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.528550 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.528563 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.528974 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc 
kubenswrapper[4775]: I1125 19:34:51.543581 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2cea07-f9ff-405f-a2cc-3bc0b329faba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad64a22adbab6e2dfb0a2b3491957bf199625f65eb944136f9e74100ca4323a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.568948 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.614392 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.631256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.631302 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.631314 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.631333 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.631346 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.636456 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e
332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.651949 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.665515 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.675641 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.692253 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.714006 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:27Z\\\",\\\"message\\\":\\\"r/openshift-kube-scheduler-crc in node crc\\\\nI1125 19:34:27.908090 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI1125 19:34:27.908100 6431 default_network_controller.go:776] Recording success event on pod 
openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 19:34:27.907799 6431 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1125 19:34:27.908139 6431 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1125 19:34:27.908155 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1125 19:34:27.908163 6431 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1125 19:34:27.908140 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.731321 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:51Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.734467 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.734529 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.734544 4775 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.734562 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.734573 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.838275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.838349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.838368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.838395 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.838419 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.846525 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:51 crc kubenswrapper[4775]: E1125 19:34:51.846754 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.941067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.941121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.941134 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.941153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:51 crc kubenswrapper[4775]: I1125 19:34:51.941164 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:51Z","lastTransitionTime":"2025-11-25T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.045598 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.045685 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.045705 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.045731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.045763 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.148961 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.149015 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.149026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.149045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.149061 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.252346 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.252424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.252446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.252475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.252499 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.356923 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.357003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.357024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.357054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.357074 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.392875 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/3.log" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.394387 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/2.log" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.399411 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5" exitCode=1 Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.399499 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.399601 4775 scope.go:117] "RemoveContainer" containerID="70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.400509 4775 scope.go:117] "RemoveContainer" containerID="a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5" Nov 25 19:34:52 crc kubenswrapper[4775]: E1125 19:34:52.400763 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.420149 4775 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.449036 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.461411 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.461487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.461507 4775 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.461558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.461580 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.468589 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.488942 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.513040 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.535982 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70be8a4bee45d39adfdefd480862fb8582d32bb8f181b794be927861e94cdb2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:27Z\\\",\\\"message\\\":\\\"r/openshift-kube-scheduler-crc in node crc\\\\nI1125 19:34:27.908090 6431 
obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI1125 19:34:27.908100 6431 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 19:34:27.907799 6431 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1125 19:34:27.908139 6431 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1125 19:34:27.908155 6431 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1125 19:34:27.908163 6431 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1125 19:34:27.908140 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:51Z\\\",\\\"message\\\":\\\"] Service openshift-kube-apiserver/apiserver retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1125 19:34:51.857803 6765 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57
d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.555023 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.564784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.564841 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.564860 4775 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.564889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.564908 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.573542 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4
259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\
\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.595844 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687f
ddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.613471 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.632046 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.653670 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.668287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.668352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.668365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.668386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.668401 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.670506 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.684871 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.704561 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:47Z\\\",\\\"message\\\":\\\"2025-11-25T19:34:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415\\\\n2025-11-25T19:34:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415 to /host/opt/cni/bin/\\\\n2025-11-25T19:34:02Z [verbose] multus-daemon started\\\\n2025-11-25T19:34:02Z [verbose] Readiness Indicator file check\\\\n2025-11-25T19:34:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.717302 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc 
kubenswrapper[4775]: I1125 19:34:52.729116 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2cea07-f9ff-405f-a2cc-3bc0b329faba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad64a22adbab6e2dfb0a2b3491957bf199625f65eb944136f9e74100ca4323a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.742062 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:52Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.771323 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.771368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.771385 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.771409 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.771428 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.846906 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.846915 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:52 crc kubenswrapper[4775]: E1125 19:34:52.847137 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.846928 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:52 crc kubenswrapper[4775]: E1125 19:34:52.847302 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:52 crc kubenswrapper[4775]: E1125 19:34:52.847527 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.874261 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.874335 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.874362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.874402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.874426 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.977404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.977452 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.977465 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.977483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:52 crc kubenswrapper[4775]: I1125 19:34:52.977495 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:52Z","lastTransitionTime":"2025-11-25T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.080109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.080144 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.080154 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.080170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.080182 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:53Z","lastTransitionTime":"2025-11-25T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.183064 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.183100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.183108 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.183123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.183134 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:53Z","lastTransitionTime":"2025-11-25T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.285809 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.285881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.285892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.285909 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.285920 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:53Z","lastTransitionTime":"2025-11-25T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.389443 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.389535 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.389555 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.389585 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.389612 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:53Z","lastTransitionTime":"2025-11-25T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.406090 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/3.log" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.411388 4775 scope.go:117] "RemoveContainer" containerID="a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5" Nov 25 19:34:53 crc kubenswrapper[4775]: E1125 19:34:53.411800 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.432961 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2cea07-f9ff-405f-a2cc-3bc0b329faba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad64a22adbab6e2dfb0a2b3491957bf199625f65eb944136f9e74100ca4323a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.456072 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.477737 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.493121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.493204 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.493234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.493275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.493301 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:53Z","lastTransitionTime":"2025-11-25T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.501511 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:47Z\\\",\\\"message\\\":\\\"2025-11-25T19:34:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415\\\\n2025-11-25T19:34:02+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415 to /host/opt/cni/bin/\\\\n2025-11-25T19:34:02Z [verbose] multus-daemon started\\\\n2025-11-25T19:34:02Z [verbose] Readiness Indicator file check\\\\n2025-11-25T19:34:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.522266 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.548743 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b
63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.573536 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.595125 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.597077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.597140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.597166 4775 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.597203 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.597231 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:53Z","lastTransitionTime":"2025-11-25T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.626213 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555
180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.651090 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.685711 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:51Z\\\",\\\"message\\\":\\\"] Service openshift-kube-apiserver/apiserver retrieved from lister for network=default: 
\\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1125 19:34:51.857803 6765 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.700175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.700231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.700244 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.700264 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.700278 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:53Z","lastTransitionTime":"2025-11-25T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.704137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.723869 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.740449 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.760217 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.777621 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.803289 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.803364 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.803386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:53 crc 
kubenswrapper[4775]: I1125 19:34:53.803413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.803434 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:53Z","lastTransitionTime":"2025-11-25T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.804878 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc
3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.826248 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:53Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.846583 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:53 crc kubenswrapper[4775]: E1125 19:34:53.846864 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.906886 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.906952 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.906973 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.907000 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:53 crc kubenswrapper[4775]: I1125 19:34:53.907020 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:53Z","lastTransitionTime":"2025-11-25T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.010507 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.010584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.010608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.010638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.010696 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.114208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.114299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.114317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.114343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.114364 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.218791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.218867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.218893 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.218929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.218957 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.323131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.323237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.323257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.323285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.323305 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.426166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.426239 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.426263 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.426291 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.426310 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.529537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.529633 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.529691 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.529721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.529742 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.633535 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.633598 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.633618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.633675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.633701 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.737396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.737451 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.737468 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.737495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.737513 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.841194 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.841271 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.841292 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.841320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.841339 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.846575 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.846602 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:54 crc kubenswrapper[4775]: E1125 19:34:54.846895 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.846955 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:54 crc kubenswrapper[4775]: E1125 19:34:54.847147 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:54 crc kubenswrapper[4775]: E1125 19:34:54.847343 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.945262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.945335 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.945354 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.945383 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:54 crc kubenswrapper[4775]: I1125 19:34:54.945458 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:54Z","lastTransitionTime":"2025-11-25T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.049774 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.049843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.049865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.049895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.049918 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.153071 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.153128 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.153145 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.153169 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.153189 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.256117 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.256189 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.256213 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.256239 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.256258 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.359877 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.359964 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.359985 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.360018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.360058 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.463911 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.463993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.464018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.464049 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.464072 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.568402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.568471 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.568489 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.568515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.568534 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.671721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.671804 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.671832 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.671866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.671888 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.775703 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.775784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.775808 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.775842 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.775864 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.846945 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:55 crc kubenswrapper[4775]: E1125 19:34:55.847219 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.879817 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.879872 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.879894 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.879918 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.879936 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.983615 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.983739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.983757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.983781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:55 crc kubenswrapper[4775]: I1125 19:34:55.983799 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:55Z","lastTransitionTime":"2025-11-25T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.087138 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.087644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.087880 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.088039 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.088184 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.191886 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.191937 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.191954 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.191980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.191999 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.295704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.295755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.295765 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.295787 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.295798 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.399210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.399279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.399296 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.399324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.399343 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.502793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.502865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.502890 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.502925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.502957 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.607241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.607298 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.607310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.607329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.607344 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.649109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.649200 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.649231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.649267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.649287 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: E1125 19:34:56.671101 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:56Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.676723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.676781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.676809 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.676829 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.676840 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: E1125 19:34:56.697585 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:56Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.703415 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.703500 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.703512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.703527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.703538 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: E1125 19:34:56.723755 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:56Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.729551 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.729637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.729687 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.729748 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.729768 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: E1125 19:34:56.750069 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:56Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.755369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.755423 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.755432 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.755448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.755458 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: E1125 19:34:56.773273 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:56Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:56 crc kubenswrapper[4775]: E1125 19:34:56.773524 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.776584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.776633 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.776673 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.776693 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.776708 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.846398 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.846437 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:56 crc kubenswrapper[4775]: E1125 19:34:56.846563 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.846812 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:56 crc kubenswrapper[4775]: E1125 19:34:56.846897 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:56 crc kubenswrapper[4775]: E1125 19:34:56.847157 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.879234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.879273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.879282 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.879299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.879310 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.983469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.983548 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.983567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.983595 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:56 crc kubenswrapper[4775]: I1125 19:34:56.983615 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:56Z","lastTransitionTime":"2025-11-25T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.087135 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.087206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.087219 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.087244 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.087259 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:57Z","lastTransitionTime":"2025-11-25T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.190765 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.190824 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.190842 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.190868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.190887 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:57Z","lastTransitionTime":"2025-11-25T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.293612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.293762 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.293787 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.293819 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.293843 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:57Z","lastTransitionTime":"2025-11-25T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.397457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.397517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.397534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.397561 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.397581 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:57Z","lastTransitionTime":"2025-11-25T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.500741 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.500827 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.500854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.500889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.500915 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:57Z","lastTransitionTime":"2025-11-25T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.604685 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.604777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.604802 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.604834 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.604859 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:57Z","lastTransitionTime":"2025-11-25T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.707835 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.707887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.707905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.707934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.707951 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:57Z","lastTransitionTime":"2025-11-25T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.811541 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.811609 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.811624 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.811644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.811685 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:57Z","lastTransitionTime":"2025-11-25T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.846839 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:57 crc kubenswrapper[4775]: E1125 19:34:57.846985 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.915693 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.915758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.915775 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.915802 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:57 crc kubenswrapper[4775]: I1125 19:34:57.915818 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:57Z","lastTransitionTime":"2025-11-25T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.018827 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.018897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.018915 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.018942 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.018967 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.122932 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.123011 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.123032 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.123064 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.123084 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.225912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.225978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.226021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.226055 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.226079 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.329639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.329747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.329770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.329800 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.329822 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.432546 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.432612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.432635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.432689 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.432708 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.536573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.536678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.536700 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.536729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.536750 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.640738 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.640810 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.640831 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.640861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.640882 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.743908 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.743982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.743999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.744027 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.744046 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.846390 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:34:58 crc kubenswrapper[4775]: E1125 19:34:58.846610 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.846404 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.846986 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:34:58 crc kubenswrapper[4775]: E1125 19:34:58.847063 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:34:58 crc kubenswrapper[4775]: E1125 19:34:58.847089 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.848247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.848307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.848326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.848354 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.848374 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.872815 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:58Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.896338 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:58Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.915195 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:58Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.934973 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:58Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.955035 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.955099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.955117 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 
25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.955143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.955161 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:58Z","lastTransitionTime":"2025-11-25T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.977836 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:51Z\\\",\\\"message\\\":\\\"] Service openshift-kube-apiserver/apiserver retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1125 19:34:51.857803 6765 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:58Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:58 crc kubenswrapper[4775]: I1125 19:34:58.996081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:58Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.018078 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79
ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.037911 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.059428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.059487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.059507 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.059537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.059561 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.059797 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.079347 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.105786 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.122208 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.138369 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.159269 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.161970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.162033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.162052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc 
kubenswrapper[4775]: I1125 19:34:59.162080 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.162100 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.178083 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.202723 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:47Z\\\",\\\"message\\\":\\\"2025-11-25T19:34:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415\\\\n2025-11-25T19:34:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415 to /host/opt/cni/bin/\\\\n2025-11-25T19:34:02Z [verbose] multus-daemon started\\\\n2025-11-25T19:34:02Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T19:34:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.220965 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.239124 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2cea07-f9ff-405f-a2cc-3bc0b329faba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad64a22adbab6e2dfb0a2b3491957bf199625f65eb944136f9e74100ca4323a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:34:59Z is after 2025-08-24T17:21:41Z" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.265250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.265300 4775 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.265318 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.265347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.265367 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.368164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.368217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.368229 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.368249 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.368262 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.471831 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.471910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.471929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.471955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.471980 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.575541 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.575576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.575585 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.575602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.575613 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.679283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.679362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.679380 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.679407 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.679426 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.782478 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.782543 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.782563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.782589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.782608 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.846499 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:34:59 crc kubenswrapper[4775]: E1125 19:34:59.847018 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.886140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.886211 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.886228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.886256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.886279 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.990174 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.990269 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.990294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.990327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:34:59 crc kubenswrapper[4775]: I1125 19:34:59.990355 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:34:59Z","lastTransitionTime":"2025-11-25T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.094251 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.094360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.094419 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.094446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.094466 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:00Z","lastTransitionTime":"2025-11-25T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.198809 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.199070 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.199089 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.199118 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.199137 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:00Z","lastTransitionTime":"2025-11-25T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.302131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.302234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.302260 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.302295 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.302319 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:00Z","lastTransitionTime":"2025-11-25T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.407153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.407221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.407241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.407271 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.407292 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:00Z","lastTransitionTime":"2025-11-25T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.510504 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.510584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.510608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.510689 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.510718 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:00Z","lastTransitionTime":"2025-11-25T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.614096 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.614169 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.614191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.614222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.614247 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:00Z","lastTransitionTime":"2025-11-25T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.717962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.718063 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.718097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.718137 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.718166 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:00Z","lastTransitionTime":"2025-11-25T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.822362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.822429 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.822449 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.822479 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.822503 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:00Z","lastTransitionTime":"2025-11-25T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.846437 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.846437 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:00 crc kubenswrapper[4775]: E1125 19:35:00.846731 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.846448 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:00 crc kubenswrapper[4775]: E1125 19:35:00.846863 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:00 crc kubenswrapper[4775]: E1125 19:35:00.847072 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.927670 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.927733 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.927746 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.927769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:00 crc kubenswrapper[4775]: I1125 19:35:00.927785 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:00Z","lastTransitionTime":"2025-11-25T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.031268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.031328 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.031349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.031376 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.031399 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.134405 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.134469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.134481 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.134503 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.134518 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.238462 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.238542 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.238561 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.238594 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.238616 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.342499 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.342554 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.342566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.342626 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.342668 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.445106 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.445169 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.445185 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.445206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.445225 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.549345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.549429 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.549451 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.549486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.549509 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.653495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.653571 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.653591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.653619 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.653638 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.756738 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.756798 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.756812 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.756832 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.756848 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.846576 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:01 crc kubenswrapper[4775]: E1125 19:35:01.846858 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.860927 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.861379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.861398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.861425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.861444 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.964621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.964693 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.964718 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.964748 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:01 crc kubenswrapper[4775]: I1125 19:35:01.964766 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:01Z","lastTransitionTime":"2025-11-25T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.068101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.068189 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.068207 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.068234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.068254 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:02Z","lastTransitionTime":"2025-11-25T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.172749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.172806 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.172821 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.172844 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.172857 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:02Z","lastTransitionTime":"2025-11-25T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.275984 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.276044 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.276058 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.276080 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.276095 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:02Z","lastTransitionTime":"2025-11-25T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.379033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.379102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.379121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.379148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.379166 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:02Z","lastTransitionTime":"2025-11-25T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.481724 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.481797 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.481816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.481842 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.481862 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:02Z","lastTransitionTime":"2025-11-25T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.585263 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.585373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.585393 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.585427 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.585450 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:02Z","lastTransitionTime":"2025-11-25T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.689465 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.689523 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.689535 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.689553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.689567 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:02Z","lastTransitionTime":"2025-11-25T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.710694 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.710887 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 19:36:06.710855784 +0000 UTC m=+148.627218160 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.710966 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.711021 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.711064 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.711110 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711189 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711216 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711231 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711235 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711280 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.711271864 +0000 UTC m=+148.627634230 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711294 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.711288225 +0000 UTC m=+148.627650591 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711349 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711371 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711386 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711430 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.711413978 +0000 UTC m=+148.627776374 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711485 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.711525 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.71151079 +0000 UTC m=+148.627873186 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.794126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.794177 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.794190 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.794209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.794221 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:02Z","lastTransitionTime":"2025-11-25T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.847072 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.847072 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.847183 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.847336 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.847424 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:02 crc kubenswrapper[4775]: E1125 19:35:02.847482 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.897293 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.897352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.897363 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.897380 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:02 crc kubenswrapper[4775]: I1125 19:35:02.897392 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:02Z","lastTransitionTime":"2025-11-25T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.001263 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.001318 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.001333 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.001354 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.001369 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.104092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.104158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.104174 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.104194 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.104205 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.207158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.207432 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.207508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.207582 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.207670 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.311100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.311167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.311191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.311227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.311254 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.415251 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.415329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.415353 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.415388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.415412 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.519026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.519121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.519144 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.519209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.519236 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.623335 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.623396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.623414 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.623442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.623461 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.726833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.726913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.726936 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.726963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.726983 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.831246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.831360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.831389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.831420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.831445 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.846957 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:03 crc kubenswrapper[4775]: E1125 19:35:03.847146 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.934558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.934636 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.934696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.934735 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:03 crc kubenswrapper[4775]: I1125 19:35:03.934762 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:03Z","lastTransitionTime":"2025-11-25T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.038732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.038804 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.038827 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.038859 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.038881 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.142476 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.142536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.142552 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.142576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.142593 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.246121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.246206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.246225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.246256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.246276 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.350013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.350061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.350076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.350094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.350109 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.453206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.453246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.453256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.453272 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.453283 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.556551 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.556607 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.556627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.556686 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.556707 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.667180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.667260 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.667281 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.667312 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.667333 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.771979 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.772508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.772714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.772873 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.773016 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.846199 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.846302 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:04 crc kubenswrapper[4775]: E1125 19:35:04.847282 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.846383 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:04 crc kubenswrapper[4775]: E1125 19:35:04.847367 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:04 crc kubenswrapper[4775]: E1125 19:35:04.847717 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.876218 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.876285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.876299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.876319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.876334 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.980377 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.980454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.980473 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.980505 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:04 crc kubenswrapper[4775]: I1125 19:35:04.980526 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:04Z","lastTransitionTime":"2025-11-25T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.084833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.084898 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.084918 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.084946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.084965 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:05Z","lastTransitionTime":"2025-11-25T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.189124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.189217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.189252 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.189289 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.189308 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:05Z","lastTransitionTime":"2025-11-25T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.292054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.292151 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.292174 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.292204 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.292222 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:05Z","lastTransitionTime":"2025-11-25T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.395475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.395541 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.395561 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.395584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.395599 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:05Z","lastTransitionTime":"2025-11-25T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.498624 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.498732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.498753 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.498781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.498801 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:05Z","lastTransitionTime":"2025-11-25T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.602206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.602272 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.602298 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.602331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.602355 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:05Z","lastTransitionTime":"2025-11-25T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.705644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.705783 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.705803 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.705835 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.705854 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:05Z","lastTransitionTime":"2025-11-25T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.809728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.809806 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.809825 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.809854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.809875 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:05Z","lastTransitionTime":"2025-11-25T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.846462 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:05 crc kubenswrapper[4775]: E1125 19:35:05.847633 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.913847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.913915 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.913934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.913958 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:05 crc kubenswrapper[4775]: I1125 19:35:05.913973 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:05Z","lastTransitionTime":"2025-11-25T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.017432 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.017498 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.017522 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.017553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.017576 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.120750 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.120833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.120852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.120874 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.120888 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.225508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.225572 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.225834 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.225925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.225956 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.329981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.330072 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.330102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.330132 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.330156 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.432854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.432892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.432910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.432926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.432940 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.536443 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.536507 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.536519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.536536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.536548 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.641148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.641234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.641253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.641281 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.641300 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.744368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.744417 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.744427 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.744443 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.744455 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.845999 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.846081 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.846117 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:06 crc kubenswrapper[4775]: E1125 19:35:06.846486 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:06 crc kubenswrapper[4775]: E1125 19:35:06.846578 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:06 crc kubenswrapper[4775]: E1125 19:35:06.846202 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.849191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.849258 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.849277 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.849300 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.849325 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.952755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.952832 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.952850 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.952877 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:06 crc kubenswrapper[4775]: I1125 19:35:06.952895 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:06Z","lastTransitionTime":"2025-11-25T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.057139 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.057216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.057233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.057711 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.057762 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.059733 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.059782 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.059793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.059811 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.059823 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: E1125 19:35:07.075276 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.081274 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.081369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.081388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.081414 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.081434 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: E1125 19:35:07.104414 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.109810 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.109875 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.109887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.109934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.109954 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: E1125 19:35:07.187219 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T19:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1976b9c3-06ba-426e-8e28-5609feece292\\\",\\\"systemUUID\\\":\\\"4bfe9575-225a-4848-84aa-a2e7c416ae57\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:07Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:07 crc kubenswrapper[4775]: E1125 19:35:07.187328 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.188983 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.189021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.189035 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.189049 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.189058 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.292020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.292094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.292114 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.292143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.292160 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.395333 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.395406 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.395423 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.395450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.395470 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.499498 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.499584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.499608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.499642 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.499700 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.602158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.602207 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.602216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.602232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.602245 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.704668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.704731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.704747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.704767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.704780 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.808257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.808309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.808323 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.808343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.808356 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.846611 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:07 crc kubenswrapper[4775]: E1125 19:35:07.847004 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.910870 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.910930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.910949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.910971 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:07 crc kubenswrapper[4775]: I1125 19:35:07.910988 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:07Z","lastTransitionTime":"2025-11-25T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.013959 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.014032 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.014075 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.014098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.014112 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.118193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.118284 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.118304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.118335 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.118357 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.221169 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.221265 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.221286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.221315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.221335 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.324723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.324803 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.324826 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.324854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.324876 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.427524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.427596 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.427622 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.427703 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.427731 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.531849 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.531921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.531945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.531977 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.532001 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.635255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.635353 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.635373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.635401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.635420 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.739231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.739288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.739304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.739328 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.739348 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.843102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.843145 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.843162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.843187 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.843209 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.846580 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.846758 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:08 crc kubenswrapper[4775]: E1125 19:35:08.846809 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.846942 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:08 crc kubenswrapper[4775]: E1125 19:35:08.847259 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:08 crc kubenswrapper[4775]: E1125 19:35:08.847948 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.848487 4775 scope.go:117] "RemoveContainer" containerID="a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5" Nov 25 19:35:08 crc kubenswrapper[4775]: E1125 19:35:08.848865 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.870757 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4389cf71-c2f1-406d-ac63-ee8a23564e78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c768295f7d6276eaab127428e5735d6585781d23196c6af4489c2a6b7650136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e357600021811f9bed85cc2b177e332708ef766650cad04fca15bb2a40ae70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a92718e25a1172db70cce688c041fcaa76bf146d14dd4c7a602e3369b91082e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34186d69c046ce8634582d9bfb3c4e3b63dd3c38678201c387ea47d95a6663b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.891481 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.913236 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.930843 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8p9p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3099556d-7e22-4d2c-9dcc-1a8465a2bd32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c8536002e4df1b54b4f9f92cfa063d4bb2555180ee073bee91498821912370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlvth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8p9p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.946956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.947010 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.947024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.947048 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.947063 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:08Z","lastTransitionTime":"2025-11-25T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.962749 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31e75bd7-c713-4504-a912-0ebfdad65c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 19:33:59.565369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 19:33:59.565604 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 19:33:59.567918 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1631218385/tls.crt::/tmp/serving-cert-1631218385/tls.key\\\\\\\"\\\\nI1125 19:33:59.951561 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 19:33:59.955704 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 19:33:59.955725 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 19:33:59.955747 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 19:33:59.955755 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 19:33:59.965550 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 19:33:59.965584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965589 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 19:33:59.965593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 19:33:59.965596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 19:33:59.965600 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 19:33:59.965603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 19:33:59.965798 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 19:33:59.973187 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:08 crc kubenswrapper[4775]: I1125 19:35:08.994699 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b02c35a-be66-4cf6-afc0-12ddc2f74148\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:51Z\\\",\\\"message\\\":\\\"] Service openshift-kube-apiserver/apiserver retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1125 19:34:51.857803 6765 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114822bc69c2219399
60d9abc0fc847987e26ac73a39d125ca57d4d0589a2356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7q6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x28tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:08Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.013937 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4349a7c-699e-446c-ac37-7fbf6310803d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050f8b3fd7a7ee693a5f7a0a0ae9a13b2f0be12f64a2e6d8f1310a5bf9f887eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56825bd016b0957af499784a8d64c7d7eadc5
d107c96c776a6a2b2b3c362b453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w7gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w98l4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.030267 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-94nmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba22b2a3-bdc5-4523-9574-9111a506778a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd989e1cd6021aee2b92c14e728f1df2513c02e7e646b50f7e1105ea3ff3961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztrv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-94nmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.048978 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403a429-596b-4a0b-a715-cf342eee95fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4032b5b34cb8d34ff173d58576973fd70bbd2334e9c7a5a54544015820ef28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://454e6a46a8074d1c293b817421752a23dd32f64a304f4ba71eff58906b8cf1ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db15a635adc7617fdbb906e46a00a6723909b6be55ab26afadf23bd42930eab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.050149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.050226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.050248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.050283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.050308 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.071238 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a536c06bd6e8c0996cff4b0c6891ca2c3df37e9e5344fc826083a8c704b1483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.089579 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b3463317901e7e8a1dcbcbb62e22dcc42b42d593568e53d68c292f4de6abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://de7c00e04f098de821a1fd57d7d4aa0833eeb7f500f62a9e584ece9bb1f70445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.102250 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdb8b79f-4ccd-4606-8f27-e26301ffc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6664e5656b19173a6d2c77b288130de1cbf0c2e00070a3af4259ff0e83a91b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a23324611bd8bf83418e03d6c602b761c683068
66fcf1a4f035487bc10dbf6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zckkq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w4zbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.123859 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vwq64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc4e8832-7db1-4026-aff5-c6d34b2b8f99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa14f363ce43b5393556ecfee09ba4acb2aef97631ed069174579ec8f522c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a848402962b64a8454fb97dfb294344211f2bd55acde9535c83572ab0fb979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a92b64c45958adb5bbb37f995e6fe29179eb181e1fab1c3afd0679b9bde9a0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86f96d86b4e95cd9435a50da015aa286668cdf5c73439c2d8d98998c170652f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99c5
532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99c5532c19b5176bec766ff5cf1953af2026ef992c672d91010de36f664abb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c421ce46b207b251d7e7e6725a3fbc0f53b283ed407f1998cc5f9f0572feb986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3b719bcfed6135ce55eeed2ee1f585e2a64168ab5c88a89ae1cb76f0dac365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdjfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vwq64\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.135528 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69dvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69dvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc 
kubenswrapper[4775]: I1125 19:35:09.145169 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2cea07-f9ff-405f-a2cc-3bc0b329faba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad64a22adbab6e2dfb0a2b3491957bf199625f65eb944136f9e74100ca4323a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:33:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03b071fd507135b8af83a6f9b7c18c1480dfd8fa2f38c1f945da6f6790f4eaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T19:33:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T19:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:33:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.153161 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.153225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.153245 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 
19:35:09.153272 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.153294 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.166111 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.181265 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d5b19a7e2c09c8a69aca66c274c1c5bc48aa08be80facd6026320ecb529b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.202378 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qf2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"850f083c-ad86-47bb-8fd1-4f2a4a9e7831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T19:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T19:34:47Z\\\",\\\"message\\\":\\\"2025-11-25T19:34:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415\\\\n2025-11-25T19:34:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a8fc26e8-ee24-4af5-8200-b90616c03415 to /host/opt/cni/bin/\\\\n2025-11-25T19:34:02Z [verbose] multus-daemon started\\\\n2025-11-25T19:34:02Z [verbose] Readiness Indicator file check\\\\n2025-11-25T19:34:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T19:34:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppm9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T19:34:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qf2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T19:35:09Z is after 2025-08-24T17:21:41Z" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.257678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.257765 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.257790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.257817 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.257836 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.361714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.361769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.361786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.361814 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.361834 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.465524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.465582 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.465594 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.465616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.465631 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.569268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.569345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.569371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.569404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.569431 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.672842 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.672927 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.672950 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.672986 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.673012 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.777017 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.777094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.777112 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.777137 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.777156 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.846610 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:09 crc kubenswrapper[4775]: E1125 19:35:09.846873 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.880945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.881018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.881047 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.881082 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.881109 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.984172 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.984330 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.984349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.984371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:09 crc kubenswrapper[4775]: I1125 19:35:09.984388 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:09Z","lastTransitionTime":"2025-11-25T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.088144 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.088232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.088249 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.088278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.088300 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:10Z","lastTransitionTime":"2025-11-25T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.192130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.192188 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.192207 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.192233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.192252 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:10Z","lastTransitionTime":"2025-11-25T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.295134 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.295191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.295208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.295232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.295252 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:10Z","lastTransitionTime":"2025-11-25T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.398674 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.398732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.398751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.398800 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.398820 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:10Z","lastTransitionTime":"2025-11-25T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.501728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.501776 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.501786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.501804 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.501817 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:10Z","lastTransitionTime":"2025-11-25T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.604507 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.604544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.604555 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.604570 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.604583 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:10Z","lastTransitionTime":"2025-11-25T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.708287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.708345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.708358 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.708381 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.708402 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:10Z","lastTransitionTime":"2025-11-25T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.812026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.812088 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.812106 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.812128 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.812144 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:10Z","lastTransitionTime":"2025-11-25T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.847142 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.847244 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.847281 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:10 crc kubenswrapper[4775]: E1125 19:35:10.847438 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:10 crc kubenswrapper[4775]: E1125 19:35:10.847596 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:10 crc kubenswrapper[4775]: E1125 19:35:10.847798 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.917211 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.917280 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.917312 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.917344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:10 crc kubenswrapper[4775]: I1125 19:35:10.917367 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:10Z","lastTransitionTime":"2025-11-25T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.021111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.021173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.021186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.021208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.021224 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.124016 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.124063 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.124079 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.124099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.124117 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.229115 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.229193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.229211 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.229237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.229260 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.332932 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.332988 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.333005 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.333030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.333051 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.436220 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.436285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.436302 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.436326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.436346 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.538991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.539050 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.539067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.539092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.539110 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.642181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.642254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.642275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.642303 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.642324 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.745776 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.745851 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.745871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.745892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.745911 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.847094 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:11 crc kubenswrapper[4775]: E1125 19:35:11.847954 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.849173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.849234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.849259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.849301 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.849324 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.952739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.952815 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.952838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.952869 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:11 crc kubenswrapper[4775]: I1125 19:35:11.952891 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:11Z","lastTransitionTime":"2025-11-25T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.056474 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.056553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.056578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.056609 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.056633 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.161214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.161279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.161298 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.161322 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.161343 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.265539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.265634 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.265696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.265728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.265748 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.369768 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.369851 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.369877 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.369917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.369959 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.473886 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.474467 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.474612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.474815 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.474975 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.578005 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.578102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.578127 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.578162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.578192 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.681910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.681979 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.681997 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.682024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.682045 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.785969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.786021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.786032 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.786050 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.786063 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.846170 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.846215 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:12 crc kubenswrapper[4775]: E1125 19:35:12.846351 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.846388 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:12 crc kubenswrapper[4775]: E1125 19:35:12.846521 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:12 crc kubenswrapper[4775]: E1125 19:35:12.846715 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.889557 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.889636 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.889696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.889725 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.889745 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.995293 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.995642 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.995817 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.995974 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:12 crc kubenswrapper[4775]: I1125 19:35:12.996013 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:12Z","lastTransitionTime":"2025-11-25T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.101424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.101491 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.101507 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.101537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.101556 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:13Z","lastTransitionTime":"2025-11-25T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.206182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.206292 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.206324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.206363 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.206389 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:13Z","lastTransitionTime":"2025-11-25T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.310672 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.310745 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.310762 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.310786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.310804 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:13Z","lastTransitionTime":"2025-11-25T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.414272 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.414337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.414355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.414379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.414400 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:13Z","lastTransitionTime":"2025-11-25T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.518138 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.518219 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.518246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.518278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.518300 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:13Z","lastTransitionTime":"2025-11-25T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.622512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.622611 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.622638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.622701 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.622726 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:13Z","lastTransitionTime":"2025-11-25T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.726873 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.726948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.726972 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.726999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.727018 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:13Z","lastTransitionTime":"2025-11-25T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.830723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.830814 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.830837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.830865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.830885 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:13Z","lastTransitionTime":"2025-11-25T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.846562 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:13 crc kubenswrapper[4775]: E1125 19:35:13.847351 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.934191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.934247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.934259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.934279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:13 crc kubenswrapper[4775]: I1125 19:35:13.934293 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:13Z","lastTransitionTime":"2025-11-25T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.038202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.038284 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.038307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.038334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.038357 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.142330 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.142400 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.142411 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.142431 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.142441 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.245887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.245943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.245955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.245977 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.245989 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.350564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.350630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.350680 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.350709 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.350728 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.455270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.455362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.455388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.455422 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.455447 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.559261 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.559332 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.559350 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.559377 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.559397 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.663322 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.663389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.663408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.663437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.663458 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.766471 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.766543 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.766560 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.766590 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.766609 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.847008 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.847042 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:14 crc kubenswrapper[4775]: E1125 19:35:14.847227 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.847330 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:14 crc kubenswrapper[4775]: E1125 19:35:14.847498 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:14 crc kubenswrapper[4775]: E1125 19:35:14.847575 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.870038 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.870146 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.870166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.870191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.870209 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.973430 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.973499 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.973517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.973544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:14 crc kubenswrapper[4775]: I1125 19:35:14.973563 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:14Z","lastTransitionTime":"2025-11-25T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.077789 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.077870 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.077889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.077920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.077940 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:15Z","lastTransitionTime":"2025-11-25T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.181378 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.181472 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.181496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.181526 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.181550 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:15Z","lastTransitionTime":"2025-11-25T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.284820 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.284872 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.284885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.284903 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.284917 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:15Z","lastTransitionTime":"2025-11-25T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.392932 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.393024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.393045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.393086 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.393106 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:15Z","lastTransitionTime":"2025-11-25T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.497245 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.497345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.497370 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.497409 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.497433 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:15Z","lastTransitionTime":"2025-11-25T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.601368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.601450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.601472 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.601498 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.601552 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:15Z","lastTransitionTime":"2025-11-25T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.705473 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.705535 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.705583 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.705613 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.705634 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:15Z","lastTransitionTime":"2025-11-25T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.808885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.808951 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.808970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.808995 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.809016 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:15Z","lastTransitionTime":"2025-11-25T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.847097 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:15 crc kubenswrapper[4775]: E1125 19:35:15.847754 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.871042 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.912344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.912422 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.912439 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.912491 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:15 crc kubenswrapper[4775]: I1125 19:35:15.912511 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:15Z","lastTransitionTime":"2025-11-25T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.016785 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.017770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.017916 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.018079 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.018259 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.122396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.122489 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.122512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.122539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.122558 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.227352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.227446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.227468 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.227502 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.227527 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.330855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.330914 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.330925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.330942 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.330956 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.433343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.433401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.433419 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.433445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.433463 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.536633 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.536730 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.536747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.536779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.536801 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.640551 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.640620 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.640639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.640706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.640727 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.744726 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.744800 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.744819 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.744842 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.744861 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.846555 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.846619 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.846600 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:16 crc kubenswrapper[4775]: E1125 19:35:16.846784 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:16 crc kubenswrapper[4775]: E1125 19:35:16.847015 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:16 crc kubenswrapper[4775]: E1125 19:35:16.847180 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.848839 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.848899 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.848919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.848951 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.848974 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.952713 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.952784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.952806 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.952839 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:16 crc kubenswrapper[4775]: I1125 19:35:16.952863 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:16Z","lastTransitionTime":"2025-11-25T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.056907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.056985 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.057012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.057042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.057066 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:17Z","lastTransitionTime":"2025-11-25T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.160612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.160752 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.160783 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.160816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.160834 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:17Z","lastTransitionTime":"2025-11-25T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.264371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.264461 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.264486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.264517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.264541 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:17Z","lastTransitionTime":"2025-11-25T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.367791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.367905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.367924 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.367950 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.367971 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:17Z","lastTransitionTime":"2025-11-25T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.471204 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.471274 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.471295 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.471323 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.471342 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:17Z","lastTransitionTime":"2025-11-25T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.574893 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.574965 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.574989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.575021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.575047 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:17Z","lastTransitionTime":"2025-11-25T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.581547 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.581603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.581627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.581698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.581725 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T19:35:17Z","lastTransitionTime":"2025-11-25T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.664775 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t"] Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.665469 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.669103 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.669184 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.669713 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.670360 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.736181 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8p9p9" podStartSLOduration=78.736145211 podStartE2EDuration="1m18.736145211s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:17.734896748 +0000 UTC m=+99.651259154" watchObservedRunningTime="2025-11-25 19:35:17.736145211 +0000 UTC m=+99.652507617" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.794895 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aafe8733-1b72-4813-ac30-47340ddd82df-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.795086 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aafe8733-1b72-4813-ac30-47340ddd82df-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.795155 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aafe8733-1b72-4813-ac30-47340ddd82df-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.795209 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aafe8733-1b72-4813-ac30-47340ddd82df-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.795320 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/aafe8733-1b72-4813-ac30-47340ddd82df-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.797904 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.797879885 podStartE2EDuration="51.797879885s" podCreationTimestamp="2025-11-25 19:34:26 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:17.761124906 +0000 UTC m=+99.677487302" watchObservedRunningTime="2025-11-25 19:35:17.797879885 +0000 UTC m=+99.714242291" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.817899 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w98l4" podStartSLOduration=77.817861022 podStartE2EDuration="1m17.817861022s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:17.816930987 +0000 UTC m=+99.733293393" watchObservedRunningTime="2025-11-25 19:35:17.817861022 +0000 UTC m=+99.734223428" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.843673 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.843605816 podStartE2EDuration="1m17.843605816s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:17.843078303 +0000 UTC m=+99.759440739" watchObservedRunningTime="2025-11-25 19:35:17.843605816 +0000 UTC m=+99.759968222" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.846326 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:17 crc kubenswrapper[4775]: E1125 19:35:17.846531 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.897002 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aafe8733-1b72-4813-ac30-47340ddd82df-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.897082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aafe8733-1b72-4813-ac30-47340ddd82df-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.897113 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aafe8733-1b72-4813-ac30-47340ddd82df-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.897171 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/aafe8733-1b72-4813-ac30-47340ddd82df-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.897237 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/aafe8733-1b72-4813-ac30-47340ddd82df-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.897341 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aafe8733-1b72-4813-ac30-47340ddd82df-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.897398 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/aafe8733-1b72-4813-ac30-47340ddd82df-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.898535 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aafe8733-1b72-4813-ac30-47340ddd82df-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.906444 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aafe8733-1b72-4813-ac30-47340ddd82df-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: 
I1125 19:35:17.921217 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aafe8733-1b72-4813-ac30-47340ddd82df-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vgx9t\" (UID: \"aafe8733-1b72-4813-ac30-47340ddd82df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.930564 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-vwq64" podStartSLOduration=78.930542611 podStartE2EDuration="1m18.930542611s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:17.930238563 +0000 UTC m=+99.846600969" watchObservedRunningTime="2025-11-25 19:35:17.930542611 +0000 UTC m=+99.846904987" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.931155 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podStartSLOduration=78.931147487 podStartE2EDuration="1m18.931147487s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:17.907173627 +0000 UTC m=+99.823536003" watchObservedRunningTime="2025-11-25 19:35:17.931147487 +0000 UTC m=+99.847509863" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.956947 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-94nmx" podStartSLOduration=77.956924292 podStartE2EDuration="1m17.956924292s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 19:35:17.943064354 +0000 UTC m=+99.859426720" watchObservedRunningTime="2025-11-25 19:35:17.956924292 +0000 UTC m=+99.873286678" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.975699 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.975675596 podStartE2EDuration="1m13.975675596s" podCreationTimestamp="2025-11-25 19:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:17.974718222 +0000 UTC m=+99.891080598" watchObservedRunningTime="2025-11-25 19:35:17.975675596 +0000 UTC m=+99.892037972" Nov 25 19:35:17 crc kubenswrapper[4775]: I1125 19:35:17.988608 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.011048 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.01101671 podStartE2EDuration="3.01101671s" podCreationTimestamp="2025-11-25 19:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:18.010734192 +0000 UTC m=+99.927096598" watchObservedRunningTime="2025-11-25 19:35:18.01101671 +0000 UTC m=+99.927379116" Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.061840 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8qf2w" podStartSLOduration=79.061812081 podStartE2EDuration="1m19.061812081s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:18.061596236 +0000 UTC m=+99.977958612" 
watchObservedRunningTime="2025-11-25 19:35:18.061812081 +0000 UTC m=+99.978174457" Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.402562 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:18 crc kubenswrapper[4775]: E1125 19:35:18.402864 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:35:18 crc kubenswrapper[4775]: E1125 19:35:18.403029 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs podName:f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f nodeName:}" failed. No retries permitted until 2025-11-25 19:36:22.402973102 +0000 UTC m=+164.319335678 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs") pod "network-metrics-daemon-69dvc" (UID: "f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.684760 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" event={"ID":"aafe8733-1b72-4813-ac30-47340ddd82df","Type":"ContainerStarted","Data":"716838111a675b8cef1f70bce2946d19e76b6a679c0e51ce4211c1443cecb545"} Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.684828 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" event={"ID":"aafe8733-1b72-4813-ac30-47340ddd82df","Type":"ContainerStarted","Data":"58300fa697813eb75a0a680ab7d8048bfb8152dfdcdebf68d3926f7957399f7b"} Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.708585 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=33.708558583 podStartE2EDuration="33.708558583s" podCreationTimestamp="2025-11-25 19:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:18.099564516 +0000 UTC m=+100.015926912" watchObservedRunningTime="2025-11-25 19:35:18.708558583 +0000 UTC m=+100.624920959" Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.709269 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgx9t" podStartSLOduration=79.709260291 podStartE2EDuration="1m19.709260291s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-25 19:35:18.708780639 +0000 UTC m=+100.625143085" watchObservedRunningTime="2025-11-25 19:35:18.709260291 +0000 UTC m=+100.625622667" Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.846973 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.846969 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:18 crc kubenswrapper[4775]: E1125 19:35:18.849253 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:18 crc kubenswrapper[4775]: I1125 19:35:18.849324 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:18 crc kubenswrapper[4775]: E1125 19:35:18.849553 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:18 crc kubenswrapper[4775]: E1125 19:35:18.849802 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:19 crc kubenswrapper[4775]: I1125 19:35:19.846388 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:19 crc kubenswrapper[4775]: E1125 19:35:19.846745 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:20 crc kubenswrapper[4775]: I1125 19:35:20.847091 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:20 crc kubenswrapper[4775]: I1125 19:35:20.847145 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:20 crc kubenswrapper[4775]: I1125 19:35:20.847190 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:20 crc kubenswrapper[4775]: E1125 19:35:20.848643 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:20 crc kubenswrapper[4775]: E1125 19:35:20.848783 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:20 crc kubenswrapper[4775]: E1125 19:35:20.848867 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:21 crc kubenswrapper[4775]: I1125 19:35:21.846606 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:21 crc kubenswrapper[4775]: E1125 19:35:21.847265 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:21 crc kubenswrapper[4775]: I1125 19:35:21.847769 4775 scope.go:117] "RemoveContainer" containerID="a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5" Nov 25 19:35:21 crc kubenswrapper[4775]: E1125 19:35:21.848011 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x28tq_openshift-ovn-kubernetes(1b02c35a-be66-4cf6-afc0-12ddc2f74148)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" Nov 25 19:35:22 crc kubenswrapper[4775]: I1125 19:35:22.846528 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:22 crc kubenswrapper[4775]: I1125 19:35:22.846602 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:22 crc kubenswrapper[4775]: I1125 19:35:22.846558 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:22 crc kubenswrapper[4775]: E1125 19:35:22.846840 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:22 crc kubenswrapper[4775]: E1125 19:35:22.846984 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:22 crc kubenswrapper[4775]: E1125 19:35:22.847072 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:23 crc kubenswrapper[4775]: I1125 19:35:23.846905 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:23 crc kubenswrapper[4775]: E1125 19:35:23.847706 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:24 crc kubenswrapper[4775]: I1125 19:35:24.846481 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:24 crc kubenswrapper[4775]: I1125 19:35:24.846503 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:24 crc kubenswrapper[4775]: I1125 19:35:24.847000 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:24 crc kubenswrapper[4775]: E1125 19:35:24.847214 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:24 crc kubenswrapper[4775]: E1125 19:35:24.847374 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:24 crc kubenswrapper[4775]: E1125 19:35:24.847587 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:25 crc kubenswrapper[4775]: I1125 19:35:25.846133 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:25 crc kubenswrapper[4775]: E1125 19:35:25.846449 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:26 crc kubenswrapper[4775]: I1125 19:35:26.847242 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:26 crc kubenswrapper[4775]: I1125 19:35:26.847297 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:26 crc kubenswrapper[4775]: I1125 19:35:26.847792 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:26 crc kubenswrapper[4775]: E1125 19:35:26.848216 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:26 crc kubenswrapper[4775]: E1125 19:35:26.848392 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:26 crc kubenswrapper[4775]: E1125 19:35:26.848481 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:27 crc kubenswrapper[4775]: I1125 19:35:27.846116 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:27 crc kubenswrapper[4775]: E1125 19:35:27.846303 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:28 crc kubenswrapper[4775]: I1125 19:35:28.846820 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:28 crc kubenswrapper[4775]: I1125 19:35:28.847023 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:28 crc kubenswrapper[4775]: E1125 19:35:28.848852 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:28 crc kubenswrapper[4775]: I1125 19:35:28.849012 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:28 crc kubenswrapper[4775]: E1125 19:35:28.849169 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:28 crc kubenswrapper[4775]: E1125 19:35:28.849354 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:29 crc kubenswrapper[4775]: I1125 19:35:29.846733 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:29 crc kubenswrapper[4775]: E1125 19:35:29.846941 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:30 crc kubenswrapper[4775]: I1125 19:35:30.846485 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:30 crc kubenswrapper[4775]: I1125 19:35:30.846723 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:30 crc kubenswrapper[4775]: I1125 19:35:30.847151 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:30 crc kubenswrapper[4775]: E1125 19:35:30.847137 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:30 crc kubenswrapper[4775]: E1125 19:35:30.847445 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:30 crc kubenswrapper[4775]: E1125 19:35:30.847712 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:31 crc kubenswrapper[4775]: I1125 19:35:31.850603 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:31 crc kubenswrapper[4775]: E1125 19:35:31.850910 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:32 crc kubenswrapper[4775]: I1125 19:35:32.846202 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:32 crc kubenswrapper[4775]: I1125 19:35:32.846384 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:32 crc kubenswrapper[4775]: I1125 19:35:32.846547 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:32 crc kubenswrapper[4775]: E1125 19:35:32.846849 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:32 crc kubenswrapper[4775]: E1125 19:35:32.847032 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:32 crc kubenswrapper[4775]: E1125 19:35:32.847405 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:33 crc kubenswrapper[4775]: I1125 19:35:33.846297 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:33 crc kubenswrapper[4775]: E1125 19:35:33.846572 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.752443 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/1.log"
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.753225 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/0.log"
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.753318 4775 generic.go:334] "Generic (PLEG): container finished" podID="850f083c-ad86-47bb-8fd1-4f2a4a9e7831" containerID="0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426" exitCode=1
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.753376 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qf2w" event={"ID":"850f083c-ad86-47bb-8fd1-4f2a4a9e7831","Type":"ContainerDied","Data":"0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426"}
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.753441 4775 scope.go:117] "RemoveContainer" containerID="cb64697bf22e68802dee48532270e7bb8552f5534d37db295984e51e1b07f079"
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.754152 4775 scope.go:117] "RemoveContainer" containerID="0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426"
Nov 25 19:35:34 crc kubenswrapper[4775]: E1125 19:35:34.754444 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-8qf2w_openshift-multus(850f083c-ad86-47bb-8fd1-4f2a4a9e7831)\"" pod="openshift-multus/multus-8qf2w" podUID="850f083c-ad86-47bb-8fd1-4f2a4a9e7831"
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.846982 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:34 crc kubenswrapper[4775]: E1125 19:35:34.847211 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.846992 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:34 crc kubenswrapper[4775]: E1125 19:35:34.848029 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.848385 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:34 crc kubenswrapper[4775]: I1125 19:35:34.848439 4775 scope.go:117] "RemoveContainer" containerID="a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5"
Nov 25 19:35:34 crc kubenswrapper[4775]: E1125 19:35:34.848518 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:35 crc kubenswrapper[4775]: I1125 19:35:35.760769 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/1.log"
Nov 25 19:35:35 crc kubenswrapper[4775]: I1125 19:35:35.765563 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/3.log"
Nov 25 19:35:35 crc kubenswrapper[4775]: I1125 19:35:35.770076 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerStarted","Data":"ad9a521dc5be99aef9e604fc3390074741172291c184e8a62c3b539a30d8964e"}
Nov 25 19:35:35 crc kubenswrapper[4775]: I1125 19:35:35.770761 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq"
Nov 25 19:35:35 crc kubenswrapper[4775]: I1125 19:35:35.816273 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podStartSLOduration=96.816244582 podStartE2EDuration="1m36.816244582s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:35:35.813702796 +0000 UTC m=+117.730065222" watchObservedRunningTime="2025-11-25 19:35:35.816244582 +0000 UTC m=+117.732606988"
Nov 25 19:35:35 crc kubenswrapper[4775]: I1125 19:35:35.846582 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:35 crc kubenswrapper[4775]: E1125 19:35:35.846851 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:35 crc kubenswrapper[4775]: I1125 19:35:35.858164 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-69dvc"]
Nov 25 19:35:36 crc kubenswrapper[4775]: I1125 19:35:36.773858 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:36 crc kubenswrapper[4775]: E1125 19:35:36.774087 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:36 crc kubenswrapper[4775]: I1125 19:35:36.847013 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:36 crc kubenswrapper[4775]: E1125 19:35:36.847241 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:36 crc kubenswrapper[4775]: I1125 19:35:36.847520 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:36 crc kubenswrapper[4775]: I1125 19:35:36.847540 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:36 crc kubenswrapper[4775]: E1125 19:35:36.847816 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:36 crc kubenswrapper[4775]: E1125 19:35:36.848029 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:37 crc kubenswrapper[4775]: I1125 19:35:37.846555 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:37 crc kubenswrapper[4775]: E1125 19:35:37.846838 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:38 crc kubenswrapper[4775]: E1125 19:35:38.757764 4775 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Nov 25 19:35:38 crc kubenswrapper[4775]: I1125 19:35:38.846881 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:38 crc kubenswrapper[4775]: I1125 19:35:38.846893 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:38 crc kubenswrapper[4775]: I1125 19:35:38.847004 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:38 crc kubenswrapper[4775]: E1125 19:35:38.848985 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:38 crc kubenswrapper[4775]: E1125 19:35:38.849217 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:38 crc kubenswrapper[4775]: E1125 19:35:38.849414 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:38 crc kubenswrapper[4775]: E1125 19:35:38.952624 4775 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 25 19:35:39 crc kubenswrapper[4775]: I1125 19:35:39.846361 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:39 crc kubenswrapper[4775]: E1125 19:35:39.846537 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:40 crc kubenswrapper[4775]: I1125 19:35:40.846324 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:40 crc kubenswrapper[4775]: I1125 19:35:40.846551 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:40 crc kubenswrapper[4775]: E1125 19:35:40.846754 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:40 crc kubenswrapper[4775]: E1125 19:35:40.846999 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:40 crc kubenswrapper[4775]: I1125 19:35:40.847558 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:40 crc kubenswrapper[4775]: E1125 19:35:40.847765 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:41 crc kubenswrapper[4775]: I1125 19:35:41.846520 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:41 crc kubenswrapper[4775]: E1125 19:35:41.846883 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:42 crc kubenswrapper[4775]: I1125 19:35:42.846304 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:42 crc kubenswrapper[4775]: E1125 19:35:42.846492 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:42 crc kubenswrapper[4775]: I1125 19:35:42.846574 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:42 crc kubenswrapper[4775]: I1125 19:35:42.846717 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:42 crc kubenswrapper[4775]: E1125 19:35:42.846774 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:42 crc kubenswrapper[4775]: E1125 19:35:42.846968 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:43 crc kubenswrapper[4775]: I1125 19:35:43.846907 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:43 crc kubenswrapper[4775]: E1125 19:35:43.847482 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:43 crc kubenswrapper[4775]: E1125 19:35:43.954103 4775 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 25 19:35:44 crc kubenswrapper[4775]: I1125 19:35:44.846574 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:44 crc kubenswrapper[4775]: I1125 19:35:44.846704 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:44 crc kubenswrapper[4775]: E1125 19:35:44.846848 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:44 crc kubenswrapper[4775]: I1125 19:35:44.846979 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:44 crc kubenswrapper[4775]: E1125 19:35:44.847075 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:44 crc kubenswrapper[4775]: E1125 19:35:44.847245 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:45 crc kubenswrapper[4775]: I1125 19:35:45.846921 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:45 crc kubenswrapper[4775]: E1125 19:35:45.847150 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:46 crc kubenswrapper[4775]: I1125 19:35:46.847901 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:46 crc kubenswrapper[4775]: I1125 19:35:46.847958 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:46 crc kubenswrapper[4775]: I1125 19:35:46.847931 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:46 crc kubenswrapper[4775]: E1125 19:35:46.848169 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 19:35:46 crc kubenswrapper[4775]: E1125 19:35:46.848403 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 19:35:46 crc kubenswrapper[4775]: E1125 19:35:46.848928 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 19:35:47 crc kubenswrapper[4775]: I1125 19:35:47.846280 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:35:47 crc kubenswrapper[4775]: E1125 19:35:47.846551 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f"
Nov 25 19:35:48 crc kubenswrapper[4775]: I1125 19:35:48.846199 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 19:35:48 crc kubenswrapper[4775]: I1125 19:35:48.846247 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:35:48 crc kubenswrapper[4775]: I1125 19:35:48.846350 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 19:35:48 crc kubenswrapper[4775]: E1125 19:35:48.848051 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:48 crc kubenswrapper[4775]: E1125 19:35:48.848202 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:48 crc kubenswrapper[4775]: E1125 19:35:48.848346 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:48 crc kubenswrapper[4775]: E1125 19:35:48.955543 4775 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 19:35:49 crc kubenswrapper[4775]: I1125 19:35:49.846941 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:49 crc kubenswrapper[4775]: E1125 19:35:49.847167 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:49 crc kubenswrapper[4775]: I1125 19:35:49.847832 4775 scope.go:117] "RemoveContainer" containerID="0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426" Nov 25 19:35:50 crc kubenswrapper[4775]: I1125 19:35:50.835148 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/1.log" Nov 25 19:35:50 crc kubenswrapper[4775]: I1125 19:35:50.835874 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qf2w" event={"ID":"850f083c-ad86-47bb-8fd1-4f2a4a9e7831","Type":"ContainerStarted","Data":"52ab40ee9cac20b78eb96a25e72b9de04e6dfb11304ceee78c10e6b026448e61"} Nov 25 19:35:50 crc kubenswrapper[4775]: I1125 19:35:50.846463 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:50 crc kubenswrapper[4775]: E1125 19:35:50.846699 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:50 crc kubenswrapper[4775]: I1125 19:35:50.846975 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:50 crc kubenswrapper[4775]: E1125 19:35:50.847075 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:50 crc kubenswrapper[4775]: I1125 19:35:50.847312 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:50 crc kubenswrapper[4775]: E1125 19:35:50.847438 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:51 crc kubenswrapper[4775]: I1125 19:35:51.846477 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:51 crc kubenswrapper[4775]: E1125 19:35:51.846717 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:52 crc kubenswrapper[4775]: I1125 19:35:52.846013 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:52 crc kubenswrapper[4775]: I1125 19:35:52.846079 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:52 crc kubenswrapper[4775]: E1125 19:35:52.846262 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 19:35:52 crc kubenswrapper[4775]: I1125 19:35:52.846398 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:52 crc kubenswrapper[4775]: E1125 19:35:52.846602 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 19:35:52 crc kubenswrapper[4775]: E1125 19:35:52.846721 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 19:35:53 crc kubenswrapper[4775]: I1125 19:35:53.846111 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:53 crc kubenswrapper[4775]: E1125 19:35:53.846353 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69dvc" podUID="f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f" Nov 25 19:35:54 crc kubenswrapper[4775]: I1125 19:35:54.846476 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:35:54 crc kubenswrapper[4775]: I1125 19:35:54.846519 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:35:54 crc kubenswrapper[4775]: I1125 19:35:54.846808 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:35:54 crc kubenswrapper[4775]: I1125 19:35:54.850367 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 19:35:54 crc kubenswrapper[4775]: I1125 19:35:54.850607 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 19:35:54 crc kubenswrapper[4775]: I1125 19:35:54.850777 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 19:35:54 crc kubenswrapper[4775]: I1125 19:35:54.851158 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 19:35:55 crc kubenswrapper[4775]: I1125 19:35:55.846805 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc" Nov 25 19:35:55 crc kubenswrapper[4775]: I1125 19:35:55.851551 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 19:35:55 crc kubenswrapper[4775]: I1125 19:35:55.852126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.566454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.621439 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pdqsx"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.622366 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.626695 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.628877 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.633978 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.634716 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.638390 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.641246 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.643735 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.644942 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.646094 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.647865 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.652540 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.652720 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.652855 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.653087 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.667715 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.670236 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.677427 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.677905 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.678812 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.679676 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.680403 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.681841 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2n2wg"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.682745 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.683005 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2c2hp"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.683500 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.683789 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pzkwp"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.684339 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.692926 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q5ml6"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.693880 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l975q"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.694593 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.695148 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.701125 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xnxgj"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.703184 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8ldr6"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.703579 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.703834 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.703985 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.704197 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.723282 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dhhr"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.724119 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zzjj4"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.724910 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.725487 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.735426 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-75q9h"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.735986 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.736443 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.737171 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.737261 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.737472 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.738206 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.738615 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.739076 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.739448 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.742156 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.743010 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.744800 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.744905 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.758312 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7h68s"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.760150 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.764825 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7h68s" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.766938 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.767827 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.769390 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-vxrkp"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.770249 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.770519 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.770905 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.771853 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.776429 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.776831 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.777020 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.777288 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.778852 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.779229 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.779405 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 19:35:58 crc kubenswrapper[4775]: 
I1125 19:35:58.779714 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.779917 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.780054 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.780280 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.780396 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.780774 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.780924 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789750 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwm6g\" (UniqueName: \"kubernetes.io/projected/12ce8247-daa8-42ce-90f4-b39317ca8583-kube-api-access-xwm6g\") pod \"cluster-samples-operator-665b6dd947-ws2gh\" (UID: \"12ce8247-daa8-42ce-90f4-b39317ca8583\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789801 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4c338a67-e4f2-49d1-a75b-1db89500dfd1-config\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789826 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e865c9de-8fd2-4b09-854c-0426a35d3290-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789843 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4rph\" (UniqueName: \"kubernetes.io/projected/7ee4a869-0151-49bd-bde4-34be52d97b8d-kube-api-access-x4rph\") pod \"openshift-config-operator-7777fb866f-l975q\" (UID: \"7ee4a869-0151-49bd-bde4-34be52d97b8d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789858 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/12ce8247-daa8-42ce-90f4-b39317ca8583-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ws2gh\" (UID: \"12ce8247-daa8-42ce-90f4-b39317ca8583\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789885 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-client-ca\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq4mf\" (UniqueName: \"kubernetes.io/projected/ec7a3c48-29be-4d48-b897-1b84a51e1583-kube-api-access-lq4mf\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789916 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25947be7-09e9-475c-a477-90b964a3c16e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5d9l\" (UID: \"25947be7-09e9-475c-a477-90b964a3c16e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789932 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvd6\" (UniqueName: \"kubernetes.io/projected/e865c9de-8fd2-4b09-854c-0426a35d3290-kube-api-access-4lvd6\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789952 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/145a12a6-6592-4f46-9b71-4db14ccb3faa-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gptl7\" (UID: \"145a12a6-6592-4f46-9b71-4db14ccb3faa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789968 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ec7a3c48-29be-4d48-b897-1b84a51e1583-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.789986 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee4a869-0151-49bd-bde4-34be52d97b8d-serving-cert\") pod \"openshift-config-operator-7777fb866f-l975q\" (UID: \"7ee4a869-0151-49bd-bde4-34be52d97b8d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790032 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tclqb\" (UniqueName: \"kubernetes.io/projected/0d24c230-e34e-4509-bba0-86d680714e25-kube-api-access-tclqb\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790048 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98afa2eb-287e-4c9f-98d4-9b21849b04a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dctvs\" (UID: \"98afa2eb-287e-4c9f-98d4-9b21849b04a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790062 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ec7a3c48-29be-4d48-b897-1b84a51e1583-etcd-client\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: 
\"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790079 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0a8caaab-1fce-46d6-8d6d-316903e159de-etcd-client\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790100 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0a8caaab-1fce-46d6-8d6d-316903e159de-etcd-ca\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790116 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6-metrics-tls\") pod \"dns-operator-744455d44c-xnxgj\" (UID: \"c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790131 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec7a3c48-29be-4d48-b897-1b84a51e1583-serving-cert\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790147 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-serving-cert\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790169 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990bfa85-5063-451b-a3c1-13a918a2069d-config\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790187 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-oauth-serving-cert\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790201 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/990bfa85-5063-451b-a3c1-13a918a2069d-service-ca-bundle\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-oauth-config\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 
19:35:58.790242 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-client-ca\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790266 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8caaab-1fce-46d6-8d6d-316903e159de-serving-cert\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790282 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7ee4a869-0151-49bd-bde4-34be52d97b8d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l975q\" (UID: \"7ee4a869-0151-49bd-bde4-34be52d97b8d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790303 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec7a3c48-29be-4d48-b897-1b84a51e1583-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790320 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-service-ca\") pod 
\"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790337 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a8caaab-1fce-46d6-8d6d-316903e159de-etcd-service-ca\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790354 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a8caaab-1fce-46d6-8d6d-316903e159de-config\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790362 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790370 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec7a3c48-29be-4d48-b897-1b84a51e1583-audit-dir\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790507 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790540 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98afa2eb-287e-4c9f-98d4-9b21849b04a4-config\") 
pod \"kube-apiserver-operator-766d6c64bb-dctvs\" (UID: \"98afa2eb-287e-4c9f-98d4-9b21849b04a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790574 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6865cd6d-f340-4084-9efe-388f7744d93a-serving-cert\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790597 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec7a3c48-29be-4d48-b897-1b84a51e1583-audit-policies\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790659 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ec7a3c48-29be-4d48-b897-1b84a51e1583-encryption-config\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790712 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8llt\" (UniqueName: \"kubernetes.io/projected/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-kube-api-access-g8llt\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790749 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790773 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47lz7\" (UniqueName: \"kubernetes.io/projected/990bfa85-5063-451b-a3c1-13a918a2069d-kube-api-access-47lz7\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790834 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e865c9de-8fd2-4b09-854c-0426a35d3290-config\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790886 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltsxt\" (UniqueName: \"kubernetes.io/projected/6865cd6d-f340-4084-9efe-388f7744d93a-kube-api-access-ltsxt\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.790989 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d24c230-e34e-4509-bba0-86d680714e25-serving-cert\") pod 
\"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791011 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/990bfa85-5063-451b-a3c1-13a918a2069d-serving-cert\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791029 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98afa2eb-287e-4c9f-98d4-9b21849b04a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dctvs\" (UID: \"98afa2eb-287e-4c9f-98d4-9b21849b04a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791124 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c338a67-e4f2-49d1-a75b-1db89500dfd1-trusted-ca\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791164 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145a12a6-6592-4f46-9b71-4db14ccb3faa-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gptl7\" 
(UID: \"145a12a6-6592-4f46-9b71-4db14ccb3faa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791184 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6r84\" (UniqueName: \"kubernetes.io/projected/145a12a6-6592-4f46-9b71-4db14ccb3faa-kube-api-access-j6r84\") pod \"openshift-apiserver-operator-796bbdcf4f-gptl7\" (UID: \"145a12a6-6592-4f46-9b71-4db14ccb3faa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791198 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c338a67-e4f2-49d1-a75b-1db89500dfd1-serving-cert\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791213 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25947be7-09e9-475c-a477-90b964a3c16e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5d9l\" (UID: \"25947be7-09e9-475c-a477-90b964a3c16e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791230 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wzpr\" (UniqueName: \"kubernetes.io/projected/0a8caaab-1fce-46d6-8d6d-316903e159de-kube-api-access-9wzpr\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 
19:35:58.791248 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e865c9de-8fd2-4b09-854c-0426a35d3290-images\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791264 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-trusted-ca-bundle\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791284 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrsm9\" (UniqueName: \"kubernetes.io/projected/25947be7-09e9-475c-a477-90b964a3c16e-kube-api-access-wrsm9\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5d9l\" (UID: \"25947be7-09e9-475c-a477-90b964a3c16e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791301 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-config\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791320 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/990bfa85-5063-451b-a3c1-13a918a2069d-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791341 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq4g9\" (UniqueName: \"kubernetes.io/projected/4c338a67-e4f2-49d1-a75b-1db89500dfd1-kube-api-access-xq4g9\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791247 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791372 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-config\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791477 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-config\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791503 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkzz\" (UniqueName: \"kubernetes.io/projected/c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6-kube-api-access-jvkzz\") pod 
\"dns-operator-744455d44c-xnxgj\" (UID: \"c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.791809 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.792193 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.792463 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.800200 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.808304 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.810664 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.810952 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.811156 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.811304 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.813420 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.820415 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.820601 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.820799 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.821067 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.825902 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.826333 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.828271 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.828485 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.828633 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.833096 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5h4vj"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.837682 4775 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.829927 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.838173 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-spkbv"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.829981 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.830031 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.830143 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.838593 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831118 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831245 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831281 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831331 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831389 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831415 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.839044 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.839050 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831444 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831472 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831506 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831535 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831554 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831591 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831622 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831680 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831720 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: 
I1125 19:35:58.831732 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831771 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831783 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831819 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831851 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831854 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831885 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831898 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831928 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831944 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831960 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 
19:35:58.831967 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832000 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.831998 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832036 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832053 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832070 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832083 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832097 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832102 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832142 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 
19:35:58.832169 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832207 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832247 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832310 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832317 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832365 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832370 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832413 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832443 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.832488 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.833068 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.833773 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.834157 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.835449 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.836744 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.840320 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.845787 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.845885 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wzwk5"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.855459 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.855585 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.858946 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.861661 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.870018 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.880743 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.888962 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.891698 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.891874 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.893234 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mzblf"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.893462 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894060 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wzpr\" (UniqueName: \"kubernetes.io/projected/0a8caaab-1fce-46d6-8d6d-316903e159de-kube-api-access-9wzpr\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894107 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e865c9de-8fd2-4b09-854c-0426a35d3290-images\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894129 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-trusted-ca-bundle\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894146 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrsm9\" (UniqueName: \"kubernetes.io/projected/25947be7-09e9-475c-a477-90b964a3c16e-kube-api-access-wrsm9\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5d9l\" (UID: \"25947be7-09e9-475c-a477-90b964a3c16e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894165 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-config\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894185 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/990bfa85-5063-451b-a3c1-13a918a2069d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894203 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq4g9\" (UniqueName: \"kubernetes.io/projected/4c338a67-e4f2-49d1-a75b-1db89500dfd1-kube-api-access-xq4g9\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894242 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-config\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894263 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bh2t\" (UniqueName: \"kubernetes.io/projected/57313bf3-1361-49f7-9a66-922b42ea36e7-kube-api-access-8bh2t\") pod \"control-plane-machine-set-operator-78cbb6b69f-g582m\" (UID: \"57313bf3-1361-49f7-9a66-922b42ea36e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" Nov 25 19:35:58 crc 
kubenswrapper[4775]: I1125 19:35:58.894284 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-config\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894303 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvkzz\" (UniqueName: \"kubernetes.io/projected/c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6-kube-api-access-jvkzz\") pod \"dns-operator-744455d44c-xnxgj\" (UID: \"c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894323 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwm6g\" (UniqueName: \"kubernetes.io/projected/12ce8247-daa8-42ce-90f4-b39317ca8583-kube-api-access-xwm6g\") pod \"cluster-samples-operator-665b6dd947-ws2gh\" (UID: \"12ce8247-daa8-42ce-90f4-b39317ca8583\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894339 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c338a67-e4f2-49d1-a75b-1db89500dfd1-config\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894357 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/12ce8247-daa8-42ce-90f4-b39317ca8583-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ws2gh\" (UID: 
\"12ce8247-daa8-42ce-90f4-b39317ca8583\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894375 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e865c9de-8fd2-4b09-854c-0426a35d3290-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894394 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4rph\" (UniqueName: \"kubernetes.io/projected/7ee4a869-0151-49bd-bde4-34be52d97b8d-kube-api-access-x4rph\") pod \"openshift-config-operator-7777fb866f-l975q\" (UID: \"7ee4a869-0151-49bd-bde4-34be52d97b8d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894413 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-trusted-ca\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894441 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-client-ca\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894461 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lq4mf\" (UniqueName: \"kubernetes.io/projected/ec7a3c48-29be-4d48-b897-1b84a51e1583-kube-api-access-lq4mf\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894478 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lvd6\" (UniqueName: \"kubernetes.io/projected/e865c9de-8fd2-4b09-854c-0426a35d3290-kube-api-access-4lvd6\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894496 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25947be7-09e9-475c-a477-90b964a3c16e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5d9l\" (UID: \"25947be7-09e9-475c-a477-90b964a3c16e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894515 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/145a12a6-6592-4f46-9b71-4db14ccb3faa-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gptl7\" (UID: \"145a12a6-6592-4f46-9b71-4db14ccb3faa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894533 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ec7a3c48-29be-4d48-b897-1b84a51e1583-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894556 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee4a869-0151-49bd-bde4-34be52d97b8d-serving-cert\") pod \"openshift-config-operator-7777fb866f-l975q\" (UID: \"7ee4a869-0151-49bd-bde4-34be52d97b8d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894573 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tclqb\" (UniqueName: \"kubernetes.io/projected/0d24c230-e34e-4509-bba0-86d680714e25-kube-api-access-tclqb\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894595 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98afa2eb-287e-4c9f-98d4-9b21849b04a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dctvs\" (UID: \"98afa2eb-287e-4c9f-98d4-9b21849b04a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894613 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ec7a3c48-29be-4d48-b897-1b84a51e1583-etcd-client\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894629 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0a8caaab-1fce-46d6-8d6d-316903e159de-etcd-client\") pod 
\"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894661 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0a8caaab-1fce-46d6-8d6d-316903e159de-etcd-ca\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894678 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6-metrics-tls\") pod \"dns-operator-744455d44c-xnxgj\" (UID: \"c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894693 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec7a3c48-29be-4d48-b897-1b84a51e1583-serving-cert\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894711 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-serving-cert\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894733 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990bfa85-5063-451b-a3c1-13a918a2069d-config\") pod 
\"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894752 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/990bfa85-5063-451b-a3c1-13a918a2069d-service-ca-bundle\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894766 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-oauth-serving-cert\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894786 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/57313bf3-1361-49f7-9a66-922b42ea36e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g582m\" (UID: \"57313bf3-1361-49f7-9a66-922b42ea36e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894812 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-oauth-config\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894828 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-client-ca\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894907 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7ee4a869-0151-49bd-bde4-34be52d97b8d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l975q\" (UID: \"7ee4a869-0151-49bd-bde4-34be52d97b8d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894927 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8caaab-1fce-46d6-8d6d-316903e159de-serving-cert\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec7a3c48-29be-4d48-b897-1b84a51e1583-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894972 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-service-ca\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 
25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.894988 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a8caaab-1fce-46d6-8d6d-316903e159de-etcd-service-ca\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895007 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec7a3c48-29be-4d48-b897-1b84a51e1583-audit-dir\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895023 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a8caaab-1fce-46d6-8d6d-316903e159de-config\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895040 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-metrics-tls\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895058 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98afa2eb-287e-4c9f-98d4-9b21849b04a4-config\") pod \"kube-apiserver-operator-766d6c64bb-dctvs\" (UID: \"98afa2eb-287e-4c9f-98d4-9b21849b04a4\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895075 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6865cd6d-f340-4084-9efe-388f7744d93a-serving-cert\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895090 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec7a3c48-29be-4d48-b897-1b84a51e1583-audit-policies\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895107 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895123 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdqf6\" (UniqueName: \"kubernetes.io/projected/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-kube-api-access-rdqf6\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895141 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/ec7a3c48-29be-4d48-b897-1b84a51e1583-encryption-config\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895160 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8llt\" (UniqueName: \"kubernetes.io/projected/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-kube-api-access-g8llt\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895182 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895199 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47lz7\" (UniqueName: \"kubernetes.io/projected/990bfa85-5063-451b-a3c1-13a918a2069d-kube-api-access-47lz7\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895216 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e865c9de-8fd2-4b09-854c-0426a35d3290-config\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895233 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ltsxt\" (UniqueName: \"kubernetes.io/projected/6865cd6d-f340-4084-9efe-388f7744d93a-kube-api-access-ltsxt\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895256 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d24c230-e34e-4509-bba0-86d680714e25-serving-cert\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895273 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/990bfa85-5063-451b-a3c1-13a918a2069d-serving-cert\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895289 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98afa2eb-287e-4c9f-98d4-9b21849b04a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dctvs\" (UID: \"98afa2eb-287e-4c9f-98d4-9b21849b04a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895305 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c338a67-e4f2-49d1-a75b-1db89500dfd1-trusted-ca\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " 
pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895320 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25947be7-09e9-475c-a477-90b964a3c16e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5d9l\" (UID: \"25947be7-09e9-475c-a477-90b964a3c16e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895337 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145a12a6-6592-4f46-9b71-4db14ccb3faa-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gptl7\" (UID: \"145a12a6-6592-4f46-9b71-4db14ccb3faa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895356 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6r84\" (UniqueName: \"kubernetes.io/projected/145a12a6-6592-4f46-9b71-4db14ccb3faa-kube-api-access-j6r84\") pod \"openshift-apiserver-operator-796bbdcf4f-gptl7\" (UID: \"145a12a6-6592-4f46-9b71-4db14ccb3faa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.895371 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c338a67-e4f2-49d1-a75b-1db89500dfd1-serving-cert\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.897008 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.898181 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e865c9de-8fd2-4b09-854c-0426a35d3290-images\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.898436 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-client-ca\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.899471 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-trusted-ca-bundle\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.899980 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.900529 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-config\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.901157 4775 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.901373 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/990bfa85-5063-451b-a3c1-13a918a2069d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.901700 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pdqsx"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.901774 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-76xlm"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.901975 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.902001 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-config\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.902279 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.902936 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.903027 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.903090 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.903154 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2c2hp"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.903292 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.903372 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.903462 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.903486 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2n2wg"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.903890 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0a8caaab-1fce-46d6-8d6d-316903e159de-etcd-ca\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.904848 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l975q"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.904839 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-config\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.905332 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ec7a3c48-29be-4d48-b897-1b84a51e1583-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.905752 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec7a3c48-29be-4d48-b897-1b84a51e1583-audit-policies\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.905958 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990bfa85-5063-451b-a3c1-13a918a2069d-config\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.906172 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.906710 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/990bfa85-5063-451b-a3c1-13a918a2069d-service-ca-bundle\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.906788 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ee4a869-0151-49bd-bde4-34be52d97b8d-serving-cert\") pod \"openshift-config-operator-7777fb866f-l975q\" (UID: \"7ee4a869-0151-49bd-bde4-34be52d97b8d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.907528 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-oauth-serving-cert\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.907334 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c338a67-e4f2-49d1-a75b-1db89500dfd1-config\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.908371 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-client-ca\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.908695 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7ee4a869-0151-49bd-bde4-34be52d97b8d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l975q\" (UID: \"7ee4a869-0151-49bd-bde4-34be52d97b8d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.908780 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.908851 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec7a3c48-29be-4d48-b897-1b84a51e1583-audit-dir\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.909048 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.909296 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q5ml6"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.909446 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec7a3c48-29be-4d48-b897-1b84a51e1583-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.910053 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-oauth-config\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.910467 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c338a67-e4f2-49d1-a75b-1db89500dfd1-serving-cert\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.911209 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a8caaab-1fce-46d6-8d6d-316903e159de-etcd-service-ca\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.912318 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e865c9de-8fd2-4b09-854c-0426a35d3290-config\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.912478 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c338a67-e4f2-49d1-a75b-1db89500dfd1-trusted-ca\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.912528 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25947be7-09e9-475c-a477-90b964a3c16e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5d9l\" (UID: \"25947be7-09e9-475c-a477-90b964a3c16e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.912984 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/12ce8247-daa8-42ce-90f4-b39317ca8583-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ws2gh\" (UID: \"12ce8247-daa8-42ce-90f4-b39317ca8583\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.913458 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ec7a3c48-29be-4d48-b897-1b84a51e1583-encryption-config\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.913470 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.914057 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.914214 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98afa2eb-287e-4c9f-98d4-9b21849b04a4-config\") pod \"kube-apiserver-operator-766d6c64bb-dctvs\" (UID: \"98afa2eb-287e-4c9f-98d4-9b21849b04a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.914458 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e865c9de-8fd2-4b09-854c-0426a35d3290-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.914540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145a12a6-6592-4f46-9b71-4db14ccb3faa-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gptl7\" (UID: \"145a12a6-6592-4f46-9b71-4db14ccb3faa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.915275 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/990bfa85-5063-451b-a3c1-13a918a2069d-serving-cert\") pod 
\"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.915276 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a8caaab-1fce-46d6-8d6d-316903e159de-config\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.916205 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5h4vj"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.916913 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-service-ca\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.918276 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-serving-cert\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.918798 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d24c230-e34e-4509-bba0-86d680714e25-serving-cert\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.918894 4775 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xnxgj"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.920141 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.921159 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dhhr"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.922120 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.923108 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8ldr6"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.924123 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.925451 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8caaab-1fce-46d6-8d6d-316903e159de-serving-cert\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.925493 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pzkwp"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.928190 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.928459 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zzjj4"] 
Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.928505 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m88v8"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.935966 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25947be7-09e9-475c-a477-90b964a3c16e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5d9l\" (UID: \"25947be7-09e9-475c-a477-90b964a3c16e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.936281 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6865cd6d-f340-4084-9efe-388f7744d93a-serving-cert\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.936532 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98afa2eb-287e-4c9f-98d4-9b21849b04a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dctvs\" (UID: \"98afa2eb-287e-4c9f-98d4-9b21849b04a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.936689 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ec7a3c48-29be-4d48-b897-1b84a51e1583-etcd-client\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.937806 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-7h68s"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.938557 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0a8caaab-1fce-46d6-8d6d-316903e159de-etcd-client\") pod \"etcd-operator-b45778765-2n2wg\" (UID: \"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.938627 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m88v8" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.939120 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6-metrics-tls\") pod \"dns-operator-744455d44c-xnxgj\" (UID: \"c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.939229 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/145a12a6-6592-4f46-9b71-4db14ccb3faa-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gptl7\" (UID: \"145a12a6-6592-4f46-9b71-4db14ccb3faa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.939440 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.945030 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.947823 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.951263 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-spkbv"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.952492 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.955342 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.958430 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.959480 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec7a3c48-29be-4d48-b897-1b84a51e1583-serving-cert\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.959860 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-75q9h"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.961336 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.961989 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.963070 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wzwk5"] Nov 
25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.965286 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.966121 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-76xlm"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.967340 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.968643 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mzblf"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.969871 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.970312 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.971497 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m88v8"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.973025 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.974192 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ts8lq"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.975545 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sl6bq"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.975859 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.976828 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sl6bq"] Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.976994 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sl6bq" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.983767 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.996578 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/57313bf3-1361-49f7-9a66-922b42ea36e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g582m\" (UID: \"57313bf3-1361-49f7-9a66-922b42ea36e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.996637 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-metrics-tls\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.996688 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 
19:35:58.996708 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdqf6\" (UniqueName: \"kubernetes.io/projected/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-kube-api-access-rdqf6\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.996803 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bh2t\" (UniqueName: \"kubernetes.io/projected/57313bf3-1361-49f7-9a66-922b42ea36e7-kube-api-access-8bh2t\") pod \"control-plane-machine-set-operator-78cbb6b69f-g582m\" (UID: \"57313bf3-1361-49f7-9a66-922b42ea36e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" Nov 25 19:35:58 crc kubenswrapper[4775]: I1125 19:35:58.996841 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-trusted-ca\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.003884 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.025758 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.044147 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.064775 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.085209 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.105222 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.125022 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.145268 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.165259 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.184433 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.206775 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.225493 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.245294 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.265377 4775 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.286187 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.305604 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.325295 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.346061 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.366126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.375291 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-metrics-tls\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.385252 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.421083 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.424583 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-trusted-ca\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.426358 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.446722 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.466600 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.486329 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.505765 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.525719 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.545491 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.567459 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.586521 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.605858 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.625870 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.645769 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.665719 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.673051 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/57313bf3-1361-49f7-9a66-922b42ea36e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g582m\" (UID: \"57313bf3-1361-49f7-9a66-922b42ea36e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.686571 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.705802 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.727334 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.745731 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.766081 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.785873 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.802767 4775 request.go:700] Waited for 1.010015565s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0 Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.805445 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.825174 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.845176 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.865104 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.885756 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.906197 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.925282 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 19:35:59 crc kubenswrapper[4775]: I1125 19:35:59.985820 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.004568 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.025022 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.047770 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.066142 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.085737 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.118204 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.126457 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.146136 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 
19:36:00.165884 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.187086 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.207019 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.226551 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.246889 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.266382 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.285135 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.333274 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4rph\" (UniqueName: \"kubernetes.io/projected/7ee4a869-0151-49bd-bde4-34be52d97b8d-kube-api-access-x4rph\") pod \"openshift-config-operator-7777fb866f-l975q\" (UID: \"7ee4a869-0151-49bd-bde4-34be52d97b8d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.372447 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wzpr\" (UniqueName: \"kubernetes.io/projected/0a8caaab-1fce-46d6-8d6d-316903e159de-kube-api-access-9wzpr\") pod \"etcd-operator-b45778765-2n2wg\" (UID: 
\"0a8caaab-1fce-46d6-8d6d-316903e159de\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.385391 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lvd6\" (UniqueName: \"kubernetes.io/projected/e865c9de-8fd2-4b09-854c-0426a35d3290-kube-api-access-4lvd6\") pod \"machine-api-operator-5694c8668f-q5ml6\" (UID: \"e865c9de-8fd2-4b09-854c-0426a35d3290\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.397699 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.417571 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrsm9\" (UniqueName: \"kubernetes.io/projected/25947be7-09e9-475c-a477-90b964a3c16e-kube-api-access-wrsm9\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5d9l\" (UID: \"25947be7-09e9-475c-a477-90b964a3c16e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.420463 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq4mf\" (UniqueName: \"kubernetes.io/projected/ec7a3c48-29be-4d48-b897-1b84a51e1583-kube-api-access-lq4mf\") pod \"apiserver-7bbb656c7d-4swd2\" (UID: \"ec7a3c48-29be-4d48-b897-1b84a51e1583\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.425038 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.430579 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq4g9\" (UniqueName: 
\"kubernetes.io/projected/4c338a67-e4f2-49d1-a75b-1db89500dfd1-kube-api-access-xq4g9\") pod \"console-operator-58897d9998-pzkwp\" (UID: \"4c338a67-e4f2-49d1-a75b-1db89500dfd1\") " pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.445226 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.450226 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.466016 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.504927 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.510370 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tclqb\" (UniqueName: \"kubernetes.io/projected/0d24c230-e34e-4509-bba0-86d680714e25-kube-api-access-tclqb\") pod \"controller-manager-879f6c89f-pdqsx\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.525152 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.528459 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.538722 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.573745 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvkzz\" (UniqueName: \"kubernetes.io/projected/c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6-kube-api-access-jvkzz\") pod \"dns-operator-744455d44c-xnxgj\" (UID: \"c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.590182 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.591129 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwm6g\" (UniqueName: \"kubernetes.io/projected/12ce8247-daa8-42ce-90f4-b39317ca8583-kube-api-access-xwm6g\") pod \"cluster-samples-operator-665b6dd947-ws2gh\" (UID: \"12ce8247-daa8-42ce-90f4-b39317ca8583\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.603161 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8llt\" (UniqueName: \"kubernetes.io/projected/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-kube-api-access-g8llt\") pod \"console-f9d7485db-2c2hp\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.605883 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.614268 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.624397 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.638217 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.644820 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.656669 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q5ml6"] Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.685611 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98afa2eb-287e-4c9f-98d4-9b21849b04a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dctvs\" (UID: \"98afa2eb-287e-4c9f-98d4-9b21849b04a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.709348 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l975q"] Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.722732 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltsxt\" (UniqueName: \"kubernetes.io/projected/6865cd6d-f340-4084-9efe-388f7744d93a-kube-api-access-ltsxt\") pod \"route-controller-manager-6576b87f9c-bw9d5\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 
19:36:00.728209 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47lz7\" (UniqueName: \"kubernetes.io/projected/990bfa85-5063-451b-a3c1-13a918a2069d-kube-api-access-47lz7\") pod \"authentication-operator-69f744f599-8ldr6\" (UID: \"990bfa85-5063-451b-a3c1-13a918a2069d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.742174 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6r84\" (UniqueName: \"kubernetes.io/projected/145a12a6-6592-4f46-9b71-4db14ccb3faa-kube-api-access-j6r84\") pod \"openshift-apiserver-operator-796bbdcf4f-gptl7\" (UID: \"145a12a6-6592-4f46-9b71-4db14ccb3faa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.744742 4775 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.764844 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.767799 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.768688 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.775740 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.783555 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.786771 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.806979 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.807345 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.817488 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.824836 4775 request.go:700] Waited for 1.883058975s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.830844 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.850237 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.862890 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2"] Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.862933 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l"] Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.863252 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.865476 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.884523 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 19:36:00 crc kubenswrapper[4775]: W1125 19:36:00.885644 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec7a3c48_29be_4d48_b897_1b84a51e1583.slice/crio-6cbe670761b32b53d7686f8af951ad9dec8b4c31513ea5566575ef33a608d963 WatchSource:0}: Error finding container 6cbe670761b32b53d7686f8af951ad9dec8b4c31513ea5566575ef33a608d963: Status 404 returned error can't find the container with id 6cbe670761b32b53d7686f8af951ad9dec8b4c31513ea5566575ef33a608d963 Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.904628 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.922602 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" event={"ID":"ec7a3c48-29be-4d48-b897-1b84a51e1583","Type":"ContainerStarted","Data":"6cbe670761b32b53d7686f8af951ad9dec8b4c31513ea5566575ef33a608d963"} Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.924308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" 
event={"ID":"25947be7-09e9-475c-a477-90b964a3c16e","Type":"ContainerStarted","Data":"57d4d5894dd9bfcc3e2ae6a786707e97c234f78fa401fe2078f0f98d4f30e98a"} Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.929534 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.941500 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" event={"ID":"e865c9de-8fd2-4b09-854c-0426a35d3290","Type":"ContainerStarted","Data":"fc028d28a1958ef21caa8d3c5fe69a7cfc3025d393593610826a795d00bea4d0"} Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.941583 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" event={"ID":"e865c9de-8fd2-4b09-854c-0426a35d3290","Type":"ContainerStarted","Data":"89aaba7272c750bdc9cbb7416f18dfadb4ef5a4b3ca4d8a6c1113093e67d5121"} Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.942902 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" event={"ID":"7ee4a869-0151-49bd-bde4-34be52d97b8d","Type":"ContainerStarted","Data":"f982a0cae39c886e1c395d9ba6ac55023c9a7d58725faf589d833704ee4a5ce6"} Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.944581 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.952680 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2n2wg"] Nov 25 19:36:00 crc kubenswrapper[4775]: I1125 19:36:00.990226 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdqf6\" (UniqueName: \"kubernetes.io/projected/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-kube-api-access-rdqf6\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: 
\"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.015686 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bh2t\" (UniqueName: \"kubernetes.io/projected/57313bf3-1361-49f7-9a66-922b42ea36e7-kube-api-access-8bh2t\") pod \"control-plane-machine-set-operator-78cbb6b69f-g582m\" (UID: \"57313bf3-1361-49f7-9a66-922b42ea36e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.024929 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hlhkz\" (UID: \"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128011 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-encryption-config\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128077 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128110 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/79cf629d-9f55-42f4-b5fa-58532bc6d191-stats-auth\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128147 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa227fc1-d306-45c7-908a-b1e39bd2971d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlnhk\" (UID: \"fa227fc1-d306-45c7-908a-b1e39bd2971d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4c88511-de83-4eb3-8e7e-b97271361717-audit-dir\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128220 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128248 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwrpw\" (UniqueName: \"kubernetes.io/projected/66879c07-13ef-4be1-b27b-d5d68d4d5b67-kube-api-access-vwrpw\") pod \"migrator-59844c95c7-pbchx\" (UID: 
\"66879c07-13ef-4be1-b27b-d5d68d4d5b67\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128299 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d8d4329-a2ce-4a85-a1ed-059baf355aa7-proxy-tls\") pod \"machine-config-controller-84d6567774-mh7qn\" (UID: \"6d8d4329-a2ce-4a85-a1ed-059baf355aa7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128324 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzrhq\" (UniqueName: \"kubernetes.io/projected/6d8d4329-a2ce-4a85-a1ed-059baf355aa7-kube-api-access-pzrhq\") pod \"machine-config-controller-84d6567774-mh7qn\" (UID: \"6d8d4329-a2ce-4a85-a1ed-059baf355aa7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128348 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-audit-policies\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128395 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac223290-b447-4d88-ba79-bc30253d3c27-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 
19:36:01.128419 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf192fad-a167-4814-a144-d353f121e26a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fsh6m\" (UID: \"cf192fad-a167-4814-a144-d353f121e26a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128476 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-tls\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128505 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128590 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5e32e34-55d7-4513-a4d4-192be425e29f-proxy-tls\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128677 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4zvf\" (UniqueName: 
\"kubernetes.io/projected/7959f454-8db6-4c44-9d44-9b3b2862935f-kube-api-access-t4zvf\") pod \"downloads-7954f5f757-7h68s\" (UID: \"7959f454-8db6-4c44-9d44-9b3b2862935f\") " pod="openshift-console/downloads-7954f5f757-7h68s" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128707 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-trusted-ca\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128824 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-audit\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128841 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac223290-b447-4d88-ba79-bc30253d3c27-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128878 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e890c8a-b312-4d4a-9a86-98d9aa75a3c0-config\") pod \"kube-controller-manager-operator-78b949d7b-qp4l6\" (UID: \"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: 
I1125 19:36:01.128910 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-node-pullsecrets\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.128928 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129001 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee72ebe-1d37-4620-ab61-9f1a90a346c2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfldr\" (UID: \"cee72ebe-1d37-4620-ab61-9f1a90a346c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129080 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6d8d4329-a2ce-4a85-a1ed-059baf355aa7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mh7qn\" (UID: \"6d8d4329-a2ce-4a85-a1ed-059baf355aa7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129118 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129154 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-796pv\" (UniqueName: \"kubernetes.io/projected/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-kube-api-access-796pv\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129188 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-certificates\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129204 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-etcd-serving-ca\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129218 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e890c8a-b312-4d4a-9a86-98d9aa75a3c0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qp4l6\" (UID: \"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129279 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxvj\" (UniqueName: \"kubernetes.io/projected/79cf629d-9f55-42f4-b5fa-58532bc6d191-kube-api-access-rpxvj\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129297 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vq7w\" (UniqueName: \"kubernetes.io/projected/ac223290-b447-4d88-ba79-bc30253d3c27-kube-api-access-7vq7w\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129319 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129370 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa227fc1-d306-45c7-908a-b1e39bd2971d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlnhk\" (UID: \"fa227fc1-d306-45c7-908a-b1e39bd2971d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129387 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa227fc1-d306-45c7-908a-b1e39bd2971d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlnhk\" (UID: \"fa227fc1-d306-45c7-908a-b1e39bd2971d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129429 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf192fad-a167-4814-a144-d353f121e26a-srv-cert\") pod \"olm-operator-6b444d44fb-fsh6m\" (UID: \"cf192fad-a167-4814-a144-d353f121e26a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129450 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtjc7\" (UniqueName: \"kubernetes.io/projected/cf192fad-a167-4814-a144-d353f121e26a-kube-api-access-wtjc7\") pod \"olm-operator-6b444d44fb-fsh6m\" (UID: \"cf192fad-a167-4814-a144-d353f121e26a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129470 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-image-import-ca\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.129535 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-etcd-client\") pod 
\"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130140 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ps24\" (UniqueName: \"kubernetes.io/projected/e5e32e34-55d7-4513-a4d4-192be425e29f-kube-api-access-4ps24\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130201 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac223290-b447-4d88-ba79-bc30253d3c27-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130241 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130259 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x66n8\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-kube-api-access-x66n8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc 
kubenswrapper[4775]: I1125 19:36:01.130296 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jft2j\" (UniqueName: \"kubernetes.io/projected/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-kube-api-access-jft2j\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130330 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee72ebe-1d37-4620-ab61-9f1a90a346c2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfldr\" (UID: \"cee72ebe-1d37-4620-ab61-9f1a90a346c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130552 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-audit-dir\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130592 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130609 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-bound-sa-token\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130623 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/79cf629d-9f55-42f4-b5fa-58532bc6d191-default-certificate\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130840 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130899 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-config\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130925 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5e32e34-55d7-4513-a4d4-192be425e29f-images\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 
19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.130999 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-serving-cert\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.131023 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phtjb\" (UniqueName: \"kubernetes.io/projected/c4c88511-de83-4eb3-8e7e-b97271361717-kube-api-access-phtjb\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.131083 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-auth-proxy-config\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.131169 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.131213 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-config\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.131300 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.131332 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79cf629d-9f55-42f4-b5fa-58532bc6d191-service-ca-bundle\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.132066 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-machine-approver-tls\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.132103 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.132190 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.132236 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.132302 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5e32e34-55d7-4513-a4d4-192be425e29f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.132400 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x7g2\" (UniqueName: \"kubernetes.io/projected/cee72ebe-1d37-4620-ab61-9f1a90a346c2-kube-api-access-2x7g2\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfldr\" (UID: \"cee72ebe-1d37-4620-ab61-9f1a90a346c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.132475 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79cf629d-9f55-42f4-b5fa-58532bc6d191-metrics-certs\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.132504 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e890c8a-b312-4d4a-9a86-98d9aa75a3c0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qp4l6\" (UID: \"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.132801 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.134838 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:01.634816372 +0000 UTC m=+143.551178778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.144994 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pzkwp"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.164611 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.180145 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.227005 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2c2hp"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.234409 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.235004 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 19:36:01.734634129 +0000 UTC m=+143.650996505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235086 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-audit\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac223290-b447-4d88-ba79-bc30253d3c27-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235176 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e890c8a-b312-4d4a-9a86-98d9aa75a3c0-config\") pod \"kube-controller-manager-operator-78b949d7b-qp4l6\" (UID: \"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235210 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/63343f9f-b1cf-43a2-9879-34ba51820dae-metrics-tls\") pod \"dns-default-sl6bq\" (UID: \"63343f9f-b1cf-43a2-9879-34ba51820dae\") " pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235239 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-trusted-ca\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235264 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsnwv\" (UniqueName: \"kubernetes.io/projected/00c6f5ae-339a-4349-841d-cfdc229a16b6-kube-api-access-fsnwv\") pod \"machine-config-server-ts8lq\" (UID: \"00c6f5ae-339a-4349-841d-cfdc229a16b6\") " pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235295 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235319 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-node-pullsecrets\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235346 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhfzk\" (UniqueName: \"kubernetes.io/projected/18d7bda6-3eca-44ff-8ec8-95b62b889e89-kube-api-access-fhfzk\") pod \"service-ca-9c57cc56f-wzwk5\" (UID: \"18d7bda6-3eca-44ff-8ec8-95b62b889e89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235384 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-registration-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235431 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54dca2d8-8976-4d24-b97a-a9e867d0d74b-secret-volume\") pod \"collect-profiles-29401650-h6dnw\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235457 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/243f4849-ac50-4876-a24e-bbc936a16cf4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mzblf\" (UID: \"243f4849-ac50-4876-a24e-bbc936a16cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235484 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-tmpfs\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235513 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee72ebe-1d37-4620-ab61-9f1a90a346c2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfldr\" (UID: \"cee72ebe-1d37-4620-ab61-9f1a90a346c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235552 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235583 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6d8d4329-a2ce-4a85-a1ed-059baf355aa7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mh7qn\" (UID: \"6d8d4329-a2ce-4a85-a1ed-059baf355aa7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235613 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-796pv\" (UniqueName: \"kubernetes.io/projected/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-kube-api-access-796pv\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235634 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-certificates\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235674 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-etcd-serving-ca\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235704 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpxvj\" (UniqueName: \"kubernetes.io/projected/79cf629d-9f55-42f4-b5fa-58532bc6d191-kube-api-access-rpxvj\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235726 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vq7w\" (UniqueName: \"kubernetes.io/projected/ac223290-b447-4d88-ba79-bc30253d3c27-kube-api-access-7vq7w\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235748 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e890c8a-b312-4d4a-9a86-98d9aa75a3c0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qp4l6\" (UID: \"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235777 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235811 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5h4vj\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235844 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa227fc1-d306-45c7-908a-b1e39bd2971d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlnhk\" (UID: \"fa227fc1-d306-45c7-908a-b1e39bd2971d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pknld\" (UniqueName: \"kubernetes.io/projected/03363846-dbbf-41cd-9ecc-dd8dd93906c3-kube-api-access-pknld\") pod \"catalog-operator-68c6474976-f7vp7\" (UID: \"03363846-dbbf-41cd-9ecc-dd8dd93906c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235901 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf192fad-a167-4814-a144-d353f121e26a-srv-cert\") pod \"olm-operator-6b444d44fb-fsh6m\" (UID: \"cf192fad-a167-4814-a144-d353f121e26a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235921 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa227fc1-d306-45c7-908a-b1e39bd2971d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlnhk\" (UID: \"fa227fc1-d306-45c7-908a-b1e39bd2971d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.235941 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-image-import-ca\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.236970 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee72ebe-1d37-4620-ab61-9f1a90a346c2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfldr\" (UID: \"cee72ebe-1d37-4620-ab61-9f1a90a346c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.237054 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-node-pullsecrets\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " 
pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.237141 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6d8d4329-a2ce-4a85-a1ed-059baf355aa7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mh7qn\" (UID: \"6d8d4329-a2ce-4a85-a1ed-059baf355aa7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.237711 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.238588 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e890c8a-b312-4d4a-9a86-98d9aa75a3c0-config\") pod \"kube-controller-manager-operator-78b949d7b-qp4l6\" (UID: \"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.238600 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-certificates\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.238687 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:01.738668102 +0000 UTC m=+143.655030658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239214 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-trusted-ca\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239230 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtjc7\" (UniqueName: \"kubernetes.io/projected/cf192fad-a167-4814-a144-d353f121e26a-kube-api-access-wtjc7\") pod \"olm-operator-6b444d44fb-fsh6m\" (UID: \"cf192fad-a167-4814-a144-d353f121e26a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239295 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-etcd-client\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239323 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ps24\" (UniqueName: \"kubernetes.io/projected/e5e32e34-55d7-4513-a4d4-192be425e29f-kube-api-access-4ps24\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239349 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dca2d8-8976-4d24-b97a-a9e867d0d74b-config-volume\") pod \"collect-profiles-29401650-h6dnw\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239379 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2pl4\" (UniqueName: \"kubernetes.io/projected/54dca2d8-8976-4d24-b97a-a9e867d0d74b-kube-api-access-m2pl4\") pod \"collect-profiles-29401650-h6dnw\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239407 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac223290-b447-4d88-ba79-bc30253d3c27-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239494 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74e16d71-813f-439f-a733-ce5d9ab3318c-serving-cert\") pod 
\"service-ca-operator-777779d784-spkbv\" (UID: \"74e16d71-813f-439f-a733-ce5d9ab3318c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239547 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x66n8\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-kube-api-access-x66n8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239592 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239623 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e16d71-813f-439f-a733-ce5d9ab3318c-config\") pod \"service-ca-operator-777779d784-spkbv\" (UID: \"74e16d71-813f-439f-a733-ce5d9ab3318c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239699 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jft2j\" (UniqueName: \"kubernetes.io/projected/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-kube-api-access-jft2j\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239857 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee72ebe-1d37-4620-ab61-9f1a90a346c2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfldr\" (UID: \"cee72ebe-1d37-4620-ab61-9f1a90a346c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239696 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-audit\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.239958 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-webhook-cert\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240428 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a318a01-2098-4719-839b-d3dee730659e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ph5jb\" (UID: \"6a318a01-2098-4719-839b-d3dee730659e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240464 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-audit-dir\") pod \"apiserver-76f77b778f-zzjj4\" (UID: 
\"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240489 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-bound-sa-token\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240513 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/79cf629d-9f55-42f4-b5fa-58532bc6d191-default-certificate\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240538 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5h4vj\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240564 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/00c6f5ae-339a-4349-841d-cfdc229a16b6-node-bootstrap-token\") pod \"machine-config-server-ts8lq\" (UID: \"00c6f5ae-339a-4349-841d-cfdc229a16b6\") " pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240588 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240616 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240641 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5e32e34-55d7-4513-a4d4-192be425e29f-images\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240686 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-config\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240739 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-serving-cert\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240762 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh9dp\" (UniqueName: \"kubernetes.io/projected/74e16d71-813f-439f-a733-ce5d9ab3318c-kube-api-access-qh9dp\") pod \"service-ca-operator-777779d784-spkbv\" (UID: \"74e16d71-813f-439f-a733-ce5d9ab3318c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240787 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-mountpoint-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240808 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nvmh\" (UniqueName: \"kubernetes.io/projected/6a318a01-2098-4719-839b-d3dee730659e-kube-api-access-4nvmh\") pod \"package-server-manager-789f6589d5-ph5jb\" (UID: \"6a318a01-2098-4719-839b-d3dee730659e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240835 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phtjb\" (UniqueName: \"kubernetes.io/projected/c4c88511-de83-4eb3-8e7e-b97271361717-kube-api-access-phtjb\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-auth-proxy-config\") pod 
\"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240944 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/03363846-dbbf-41cd-9ecc-dd8dd93906c3-srv-cert\") pod \"catalog-operator-68c6474976-f7vp7\" (UID: \"03363846-dbbf-41cd-9ecc-dd8dd93906c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240976 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241001 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-config\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241037 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241066 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/09c8c636-6cf7-44e5-b82c-e34e8385e895-cert\") pod \"ingress-canary-m88v8\" (UID: \"09c8c636-6cf7-44e5-b82c-e34e8385e895\") " pod="openshift-ingress-canary/ingress-canary-m88v8" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241092 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79cf629d-9f55-42f4-b5fa-58532bc6d191-service-ca-bundle\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w79bk\" (UniqueName: \"kubernetes.io/projected/63343f9f-b1cf-43a2-9879-34ba51820dae-kube-api-access-w79bk\") pod \"dns-default-sl6bq\" (UID: \"63343f9f-b1cf-43a2-9879-34ba51820dae\") " pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241146 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-machine-approver-tls\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241180 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241208 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241233 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241259 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-plugins-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241306 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5e32e34-55d7-4513-a4d4-192be425e29f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241336 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6w2z\" (UniqueName: \"kubernetes.io/projected/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-kube-api-access-t6w2z\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") 
" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241364 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x7g2\" (UniqueName: \"kubernetes.io/projected/cee72ebe-1d37-4620-ab61-9f1a90a346c2-kube-api-access-2x7g2\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfldr\" (UID: \"cee72ebe-1d37-4620-ab61-9f1a90a346c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241394 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79cf629d-9f55-42f4-b5fa-58532bc6d191-metrics-certs\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.240199 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241415 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e890c8a-b312-4d4a-9a86-98d9aa75a3c0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qp4l6\" (UID: \"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241437 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-csi-data-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241466 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qfcc\" (UniqueName: \"kubernetes.io/projected/3566ef9c-3d80-480e-b069-1ff60753877f-kube-api-access-8qfcc\") pod \"marketplace-operator-79b997595-5h4vj\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241477 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-audit-dir\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241505 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241532 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-socket-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 
19:36:01.241562 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/03363846-dbbf-41cd-9ecc-dd8dd93906c3-profile-collector-cert\") pod \"catalog-operator-68c6474976-f7vp7\" (UID: \"03363846-dbbf-41cd-9ecc-dd8dd93906c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241605 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18d7bda6-3eca-44ff-8ec8-95b62b889e89-signing-cabundle\") pod \"service-ca-9c57cc56f-wzwk5\" (UID: \"18d7bda6-3eca-44ff-8ec8-95b62b889e89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241657 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jlhg\" (UniqueName: \"kubernetes.io/projected/243f4849-ac50-4876-a24e-bbc936a16cf4-kube-api-access-9jlhg\") pod \"multus-admission-controller-857f4d67dd-mzblf\" (UID: \"243f4849-ac50-4876-a24e-bbc936a16cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241686 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-encryption-config\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241712 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241736 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7hk\" (UniqueName: \"kubernetes.io/projected/fd6511bf-ce8c-40d6-913e-b28add158dee-kube-api-access-7z7hk\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241770 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/79cf629d-9f55-42f4-b5fa-58532bc6d191-stats-auth\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241801 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18d7bda6-3eca-44ff-8ec8-95b62b889e89-signing-key\") pod \"service-ca-9c57cc56f-wzwk5\" (UID: \"18d7bda6-3eca-44ff-8ec8-95b62b889e89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241825 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/00c6f5ae-339a-4349-841d-cfdc229a16b6-certs\") pod \"machine-config-server-ts8lq\" (UID: \"00c6f5ae-339a-4349-841d-cfdc229a16b6\") " pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241853 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c4c88511-de83-4eb3-8e7e-b97271361717-audit-dir\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241878 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241902 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa227fc1-d306-45c7-908a-b1e39bd2971d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlnhk\" (UID: \"fa227fc1-d306-45c7-908a-b1e39bd2971d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241927 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwrpw\" (UniqueName: \"kubernetes.io/projected/66879c07-13ef-4be1-b27b-d5d68d4d5b67-kube-api-access-vwrpw\") pod \"migrator-59844c95c7-pbchx\" (UID: \"66879c07-13ef-4be1-b27b-d5d68d4d5b67\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241953 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s66n\" (UniqueName: \"kubernetes.io/projected/09c8c636-6cf7-44e5-b82c-e34e8385e895-kube-api-access-5s66n\") pod \"ingress-canary-m88v8\" (UID: \"09c8c636-6cf7-44e5-b82c-e34e8385e895\") " pod="openshift-ingress-canary/ingress-canary-m88v8" Nov 25 
19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.241978 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d8d4329-a2ce-4a85-a1ed-059baf355aa7-proxy-tls\") pod \"machine-config-controller-84d6567774-mh7qn\" (UID: \"6d8d4329-a2ce-4a85-a1ed-059baf355aa7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242005 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzrhq\" (UniqueName: \"kubernetes.io/projected/6d8d4329-a2ce-4a85-a1ed-059baf355aa7-kube-api-access-pzrhq\") pod \"machine-config-controller-84d6567774-mh7qn\" (UID: \"6d8d4329-a2ce-4a85-a1ed-059baf355aa7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242034 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-audit-policies\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242064 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf192fad-a167-4814-a144-d353f121e26a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fsh6m\" (UID: \"cf192fad-a167-4814-a144-d353f121e26a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242104 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-tls\") pod 
\"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242128 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac223290-b447-4d88-ba79-bc30253d3c27-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242167 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63343f9f-b1cf-43a2-9879-34ba51820dae-config-volume\") pod \"dns-default-sl6bq\" (UID: \"63343f9f-b1cf-43a2-9879-34ba51820dae\") " pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242217 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4zvf\" (UniqueName: \"kubernetes.io/projected/7959f454-8db6-4c44-9d44-9b3b2862935f-kube-api-access-t4zvf\") pod \"downloads-7954f5f757-7h68s\" (UID: \"7959f454-8db6-4c44-9d44-9b3b2862935f\") " pod="openshift-console/downloads-7954f5f757-7h68s" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242242 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5e32e34-55d7-4513-a4d4-192be425e29f-proxy-tls\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242267 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-apiservice-cert\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.242880 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf192fad-a167-4814-a144-d353f121e26a-srv-cert\") pod \"olm-operator-6b444d44fb-fsh6m\" (UID: \"cf192fad-a167-4814-a144-d353f121e26a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.243193 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.243603 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-config\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 
19:36:01.243672 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee72ebe-1d37-4620-ab61-9f1a90a346c2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfldr\" (UID: \"cee72ebe-1d37-4620-ab61-9f1a90a346c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.244516 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-etcd-serving-ca\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.244849 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.244863 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/79cf629d-9f55-42f4-b5fa-58532bc6d191-default-certificate\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.244928 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") 
" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.245164 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa227fc1-d306-45c7-908a-b1e39bd2971d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlnhk\" (UID: \"fa227fc1-d306-45c7-908a-b1e39bd2971d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.245492 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5e32e34-55d7-4513-a4d4-192be425e29f-images\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.245771 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-auth-proxy-config\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.245872 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.246088 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/79cf629d-9f55-42f4-b5fa-58532bc6d191-service-ca-bundle\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.255563 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4c88511-de83-4eb3-8e7e-b97271361717-audit-dir\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.259533 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-audit-policies\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.260166 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-image-import-ca\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.270119 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-machine-approver-tls\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.270915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.274631 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-tls\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.276049 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf192fad-a167-4814-a144-d353f121e26a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fsh6m\" (UID: \"cf192fad-a167-4814-a144-d353f121e26a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.278386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.287132 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc 
kubenswrapper[4775]: I1125 19:36:01.287445 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.287962 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa227fc1-d306-45c7-908a-b1e39bd2971d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlnhk\" (UID: \"fa227fc1-d306-45c7-908a-b1e39bd2971d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.288215 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.289261 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.299439 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d8d4329-a2ce-4a85-a1ed-059baf355aa7-proxy-tls\") pod 
\"machine-config-controller-84d6567774-mh7qn\" (UID: \"6d8d4329-a2ce-4a85-a1ed-059baf355aa7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.299773 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.300757 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.325927 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa227fc1-d306-45c7-908a-b1e39bd2971d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlnhk\" (UID: \"fa227fc1-d306-45c7-908a-b1e39bd2971d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.335667 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xnxgj"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.337217 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345276 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345472 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5h4vj\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345492 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pknld\" (UniqueName: \"kubernetes.io/projected/03363846-dbbf-41cd-9ecc-dd8dd93906c3-kube-api-access-pknld\") pod \"catalog-operator-68c6474976-f7vp7\" (UID: \"03363846-dbbf-41cd-9ecc-dd8dd93906c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345528 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dca2d8-8976-4d24-b97a-a9e867d0d74b-config-volume\") pod \"collect-profiles-29401650-h6dnw\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345546 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2pl4\" (UniqueName: \"kubernetes.io/projected/54dca2d8-8976-4d24-b97a-a9e867d0d74b-kube-api-access-m2pl4\") pod \"collect-profiles-29401650-h6dnw\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/74e16d71-813f-439f-a733-ce5d9ab3318c-serving-cert\") pod \"service-ca-operator-777779d784-spkbv\" (UID: \"74e16d71-813f-439f-a733-ce5d9ab3318c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e16d71-813f-439f-a733-ce5d9ab3318c-config\") pod \"service-ca-operator-777779d784-spkbv\" (UID: \"74e16d71-813f-439f-a733-ce5d9ab3318c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345614 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-webhook-cert\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345673 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a318a01-2098-4719-839b-d3dee730659e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ph5jb\" (UID: \"6a318a01-2098-4719-839b-d3dee730659e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345700 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5h4vj\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 
crc kubenswrapper[4775]: I1125 19:36:01.345715 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/00c6f5ae-339a-4349-841d-cfdc229a16b6-node-bootstrap-token\") pod \"machine-config-server-ts8lq\" (UID: \"00c6f5ae-339a-4349-841d-cfdc229a16b6\") " pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345772 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh9dp\" (UniqueName: \"kubernetes.io/projected/74e16d71-813f-439f-a733-ce5d9ab3318c-kube-api-access-qh9dp\") pod \"service-ca-operator-777779d784-spkbv\" (UID: \"74e16d71-813f-439f-a733-ce5d9ab3318c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.345992 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-mountpoint-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.346021 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nvmh\" (UniqueName: \"kubernetes.io/projected/6a318a01-2098-4719-839b-d3dee730659e-kube-api-access-4nvmh\") pod \"package-server-manager-789f6589d5-ph5jb\" (UID: \"6a318a01-2098-4719-839b-d3dee730659e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.346065 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/03363846-dbbf-41cd-9ecc-dd8dd93906c3-srv-cert\") pod \"catalog-operator-68c6474976-f7vp7\" (UID: 
\"03363846-dbbf-41cd-9ecc-dd8dd93906c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.346185 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w79bk\" (UniqueName: \"kubernetes.io/projected/63343f9f-b1cf-43a2-9879-34ba51820dae-kube-api-access-w79bk\") pod \"dns-default-sl6bq\" (UID: \"63343f9f-b1cf-43a2-9879-34ba51820dae\") " pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.346896 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-mountpoint-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.347908 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dca2d8-8976-4d24-b97a-a9e867d0d74b-config-volume\") pod \"collect-profiles-29401650-h6dnw\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.348544 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:01.848513336 +0000 UTC m=+143.764875702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.348732 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5h4vj\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.350133 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c8c636-6cf7-44e5-b82c-e34e8385e895-cert\") pod \"ingress-canary-m88v8\" (UID: \"09c8c636-6cf7-44e5-b82c-e34e8385e895\") " pod="openshift-ingress-canary/ingress-canary-m88v8" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.351330 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-plugins-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.351377 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6w2z\" (UniqueName: \"kubernetes.io/projected/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-kube-api-access-t6w2z\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.351485 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-csi-data-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.351543 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qfcc\" (UniqueName: \"kubernetes.io/projected/3566ef9c-3d80-480e-b069-1ff60753877f-kube-api-access-8qfcc\") pod \"marketplace-operator-79b997595-5h4vj\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.351846 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-csi-data-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.351934 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-plugins-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.351587 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-socket-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " 
pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.352052 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-socket-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.352594 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5h4vj\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.352009 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/03363846-dbbf-41cd-9ecc-dd8dd93906c3-profile-collector-cert\") pod \"catalog-operator-68c6474976-f7vp7\" (UID: \"03363846-dbbf-41cd-9ecc-dd8dd93906c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.352678 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18d7bda6-3eca-44ff-8ec8-95b62b889e89-signing-cabundle\") pod \"service-ca-9c57cc56f-wzwk5\" (UID: \"18d7bda6-3eca-44ff-8ec8-95b62b889e89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.352701 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jlhg\" (UniqueName: \"kubernetes.io/projected/243f4849-ac50-4876-a24e-bbc936a16cf4-kube-api-access-9jlhg\") pod 
\"multus-admission-controller-857f4d67dd-mzblf\" (UID: \"243f4849-ac50-4876-a24e-bbc936a16cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.352742 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7hk\" (UniqueName: \"kubernetes.io/projected/fd6511bf-ce8c-40d6-913e-b28add158dee-kube-api-access-7z7hk\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.352818 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-webhook-cert\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353474 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18d7bda6-3eca-44ff-8ec8-95b62b889e89-signing-cabundle\") pod \"service-ca-9c57cc56f-wzwk5\" (UID: \"18d7bda6-3eca-44ff-8ec8-95b62b889e89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353546 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18d7bda6-3eca-44ff-8ec8-95b62b889e89-signing-key\") pod \"service-ca-9c57cc56f-wzwk5\" (UID: \"18d7bda6-3eca-44ff-8ec8-95b62b889e89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353566 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/00c6f5ae-339a-4349-841d-cfdc229a16b6-certs\") pod 
\"machine-config-server-ts8lq\" (UID: \"00c6f5ae-339a-4349-841d-cfdc229a16b6\") " pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353586 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s66n\" (UniqueName: \"kubernetes.io/projected/09c8c636-6cf7-44e5-b82c-e34e8385e895-kube-api-access-5s66n\") pod \"ingress-canary-m88v8\" (UID: \"09c8c636-6cf7-44e5-b82c-e34e8385e895\") " pod="openshift-ingress-canary/ingress-canary-m88v8" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353676 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/03363846-dbbf-41cd-9ecc-dd8dd93906c3-srv-cert\") pod \"catalog-operator-68c6474976-f7vp7\" (UID: \"03363846-dbbf-41cd-9ecc-dd8dd93906c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353749 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63343f9f-b1cf-43a2-9879-34ba51820dae-config-volume\") pod \"dns-default-sl6bq\" (UID: \"63343f9f-b1cf-43a2-9879-34ba51820dae\") " pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353777 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-apiservice-cert\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353815 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63343f9f-b1cf-43a2-9879-34ba51820dae-metrics-tls\") pod 
\"dns-default-sl6bq\" (UID: \"63343f9f-b1cf-43a2-9879-34ba51820dae\") " pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353845 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsnwv\" (UniqueName: \"kubernetes.io/projected/00c6f5ae-339a-4349-841d-cfdc229a16b6-kube-api-access-fsnwv\") pod \"machine-config-server-ts8lq\" (UID: \"00c6f5ae-339a-4349-841d-cfdc229a16b6\") " pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353875 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhfzk\" (UniqueName: \"kubernetes.io/projected/18d7bda6-3eca-44ff-8ec8-95b62b889e89-kube-api-access-fhfzk\") pod \"service-ca-9c57cc56f-wzwk5\" (UID: \"18d7bda6-3eca-44ff-8ec8-95b62b889e89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353902 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-registration-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353930 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54dca2d8-8976-4d24-b97a-a9e867d0d74b-secret-volume\") pod \"collect-profiles-29401650-h6dnw\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353951 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/243f4849-ac50-4876-a24e-bbc936a16cf4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mzblf\" (UID: \"243f4849-ac50-4876-a24e-bbc936a16cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.353973 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-tmpfs\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.355010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-tmpfs\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.356566 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c8c636-6cf7-44e5-b82c-e34e8385e895-cert\") pod \"ingress-canary-m88v8\" (UID: \"09c8c636-6cf7-44e5-b82c-e34e8385e895\") " pod="openshift-ingress-canary/ingress-canary-m88v8" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.357089 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd6511bf-ce8c-40d6-913e-b28add158dee-registration-dir\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.357898 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/63343f9f-b1cf-43a2-9879-34ba51820dae-config-volume\") pod \"dns-default-sl6bq\" (UID: \"63343f9f-b1cf-43a2-9879-34ba51820dae\") " pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.358078 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63343f9f-b1cf-43a2-9879-34ba51820dae-metrics-tls\") pod \"dns-default-sl6bq\" (UID: \"63343f9f-b1cf-43a2-9879-34ba51820dae\") " pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.358779 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/03363846-dbbf-41cd-9ecc-dd8dd93906c3-profile-collector-cert\") pod \"catalog-operator-68c6474976-f7vp7\" (UID: \"03363846-dbbf-41cd-9ecc-dd8dd93906c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.360377 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/243f4849-ac50-4876-a24e-bbc936a16cf4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mzblf\" (UID: \"243f4849-ac50-4876-a24e-bbc936a16cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.361318 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/00c6f5ae-339a-4349-841d-cfdc229a16b6-node-bootstrap-token\") pod \"machine-config-server-ts8lq\" (UID: \"00c6f5ae-339a-4349-841d-cfdc229a16b6\") " pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.361516 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/54dca2d8-8976-4d24-b97a-a9e867d0d74b-secret-volume\") pod \"collect-profiles-29401650-h6dnw\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.362104 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/00c6f5ae-339a-4349-841d-cfdc229a16b6-certs\") pod \"machine-config-server-ts8lq\" (UID: \"00c6f5ae-339a-4349-841d-cfdc229a16b6\") " pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.362353 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18d7bda6-3eca-44ff-8ec8-95b62b889e89-signing-key\") pod \"service-ca-9c57cc56f-wzwk5\" (UID: \"18d7bda6-3eca-44ff-8ec8-95b62b889e89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.380944 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e890c8a-b312-4d4a-9a86-98d9aa75a3c0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qp4l6\" (UID: \"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.382623 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5e32e34-55d7-4513-a4d4-192be425e29f-proxy-tls\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.382758 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e16d71-813f-439f-a733-ce5d9ab3318c-config\") pod \"service-ca-operator-777779d784-spkbv\" (UID: \"74e16d71-813f-439f-a733-ce5d9ab3318c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.382759 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5e32e34-55d7-4513-a4d4-192be425e29f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.383360 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74e16d71-813f-439f-a733-ce5d9ab3318c-serving-cert\") pod \"service-ca-operator-777779d784-spkbv\" (UID: \"74e16d71-813f-439f-a733-ce5d9ab3318c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.383610 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a318a01-2098-4719-839b-d3dee730659e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ph5jb\" (UID: \"6a318a01-2098-4719-839b-d3dee730659e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.383771 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79cf629d-9f55-42f4-b5fa-58532bc6d191-metrics-certs\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc 
kubenswrapper[4775]: I1125 19:36:01.384073 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-etcd-client\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.384382 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-encryption-config\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.384935 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-serving-cert\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.385833 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-apiservice-cert\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.387504 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtjc7\" (UniqueName: \"kubernetes.io/projected/cf192fad-a167-4814-a144-d353f121e26a-kube-api-access-wtjc7\") pod \"olm-operator-6b444d44fb-fsh6m\" (UID: \"cf192fad-a167-4814-a144-d353f121e26a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc 
kubenswrapper[4775]: I1125 19:36:01.387988 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-796pv\" (UniqueName: \"kubernetes.io/projected/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-kube-api-access-796pv\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.388186 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vq7w\" (UniqueName: \"kubernetes.io/projected/ac223290-b447-4d88-ba79-bc30253d3c27-kube-api-access-7vq7w\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.388815 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpxvj\" (UniqueName: \"kubernetes.io/projected/79cf629d-9f55-42f4-b5fa-58532bc6d191-kube-api-access-rpxvj\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.389277 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/79cf629d-9f55-42f4-b5fa-58532bc6d191-stats-auth\") pod \"router-default-5444994796-vxrkp\" (UID: \"79cf629d-9f55-42f4-b5fa-58532bc6d191\") " pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.392082 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " 
pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.393901 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac223290-b447-4d88-ba79-bc30253d3c27-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.394217 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e890c8a-b312-4d4a-9a86-98d9aa75a3c0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qp4l6\" (UID: \"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.396037 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac223290-b447-4d88-ba79-bc30253d3c27-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.400386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7641661-a2a3-4eca-b5fd-892e7f60bcf4-config\") pod \"apiserver-76f77b778f-zzjj4\" (UID: \"e7641661-a2a3-4eca-b5fd-892e7f60bcf4\") " pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.405420 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ps24\" (UniqueName: 
\"kubernetes.io/projected/e5e32e34-55d7-4513-a4d4-192be425e29f-kube-api-access-4ps24\") pod \"machine-config-operator-74547568cd-h9fsn\" (UID: \"e5e32e34-55d7-4513-a4d4-192be425e29f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.416122 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.429055 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac223290-b447-4d88-ba79-bc30253d3c27-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8sh7l\" (UID: \"ac223290-b447-4d88-ba79-bc30253d3c27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.429959 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.433109 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8ldr6"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.434333 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pdqsx"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.436681 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.451007 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.453155 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x66n8\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-kube-api-access-x66n8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.455295 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.455870 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:01.955854309 +0000 UTC m=+143.872216675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.460345 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jft2j\" (UniqueName: \"kubernetes.io/projected/d8d8d255-df7c-41c3-a1ac-e8ce91afcc56-kube-api-access-jft2j\") pod \"machine-approver-56656f9798-kncmq\" (UID: \"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.466431 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.479473 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-bound-sa-token\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.486932 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.494678 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.509325 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phtjb\" (UniqueName: \"kubernetes.io/projected/c4c88511-de83-4eb3-8e7e-b97271361717-kube-api-access-phtjb\") pod \"oauth-openshift-558db77b4-8dhhr\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.525027 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x7g2\" (UniqueName: \"kubernetes.io/projected/cee72ebe-1d37-4620-ab61-9f1a90a346c2-kube-api-access-2x7g2\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfldr\" (UID: \"cee72ebe-1d37-4620-ab61-9f1a90a346c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.546590 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4zvf\" (UniqueName: \"kubernetes.io/projected/7959f454-8db6-4c44-9d44-9b3b2862935f-kube-api-access-t4zvf\") pod \"downloads-7954f5f757-7h68s\" (UID: \"7959f454-8db6-4c44-9d44-9b3b2862935f\") " pod="openshift-console/downloads-7954f5f757-7h68s" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.557214 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.565961 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.065897359 +0000 UTC m=+143.982259735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.566040 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.569819 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.570337 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.070317746 +0000 UTC m=+143.986680112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.572635 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwrpw\" (UniqueName: \"kubernetes.io/projected/66879c07-13ef-4be1-b27b-d5d68d4d5b67-kube-api-access-vwrpw\") pod \"migrator-59844c95c7-pbchx\" (UID: \"66879c07-13ef-4be1-b27b-d5d68d4d5b67\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.592136 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzrhq\" (UniqueName: \"kubernetes.io/projected/6d8d4329-a2ce-4a85-a1ed-059baf355aa7-kube-api-access-pzrhq\") pod \"machine-config-controller-84d6567774-mh7qn\" (UID: \"6d8d4329-a2ce-4a85-a1ed-059baf355aa7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.619876 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nvmh\" (UniqueName: \"kubernetes.io/projected/6a318a01-2098-4719-839b-d3dee730659e-kube-api-access-4nvmh\") pod \"package-server-manager-789f6589d5-ph5jb\" (UID: \"6a318a01-2098-4719-839b-d3dee730659e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.640424 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pknld\" (UniqueName: 
\"kubernetes.io/projected/03363846-dbbf-41cd-9ecc-dd8dd93906c3-kube-api-access-pknld\") pod \"catalog-operator-68c6474976-f7vp7\" (UID: \"03363846-dbbf-41cd-9ecc-dd8dd93906c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.668468 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2pl4\" (UniqueName: \"kubernetes.io/projected/54dca2d8-8976-4d24-b97a-a9e867d0d74b-kube-api-access-m2pl4\") pod \"collect-profiles-29401650-h6dnw\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.670616 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.670973 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.170954592 +0000 UTC m=+144.087316958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.674945 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.685079 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w79bk\" (UniqueName: \"kubernetes.io/projected/63343f9f-b1cf-43a2-9879-34ba51820dae-kube-api-access-w79bk\") pod \"dns-default-sl6bq\" (UID: \"63343f9f-b1cf-43a2-9879-34ba51820dae\") " pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.692845 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.699904 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh9dp\" (UniqueName: \"kubernetes.io/projected/74e16d71-813f-439f-a733-ce5d9ab3318c-kube-api-access-qh9dp\") pod \"service-ca-operator-777779d784-spkbv\" (UID: \"74e16d71-813f-439f-a733-ce5d9ab3318c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.700073 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.712393 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.714471 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.722375 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.729280 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6w2z\" (UniqueName: \"kubernetes.io/projected/7e8cafb9-f419-4958-95f0-3e9ffd9031a5-kube-api-access-t6w2z\") pod \"packageserver-d55dfcdfc-m8wds\" (UID: \"7e8cafb9-f419-4958-95f0-3e9ffd9031a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.737731 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.743715 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.749702 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qfcc\" (UniqueName: \"kubernetes.io/projected/3566ef9c-3d80-480e-b069-1ff60753877f-kube-api-access-8qfcc\") pod \"marketplace-operator-79b997595-5h4vj\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.750906 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7h68s" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.766961 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jlhg\" (UniqueName: \"kubernetes.io/projected/243f4849-ac50-4876-a24e-bbc936a16cf4-kube-api-access-9jlhg\") pod \"multus-admission-controller-857f4d67dd-mzblf\" (UID: \"243f4849-ac50-4876-a24e-bbc936a16cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.771896 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.772443 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.272420648 +0000 UTC m=+144.188783034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.772868 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.793184 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7hk\" (UniqueName: \"kubernetes.io/projected/fd6511bf-ce8c-40d6-913e-b28add158dee-kube-api-access-7z7hk\") pod \"csi-hostpathplugin-76xlm\" (UID: \"fd6511bf-ce8c-40d6-913e-b28add158dee\") " pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.799502 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsnwv\" (UniqueName: \"kubernetes.io/projected/00c6f5ae-339a-4349-841d-cfdc229a16b6-kube-api-access-fsnwv\") pod \"machine-config-server-ts8lq\" (UID: \"00c6f5ae-339a-4349-841d-cfdc229a16b6\") " pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.805406 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.822527 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.827452 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhfzk\" (UniqueName: \"kubernetes.io/projected/18d7bda6-3eca-44ff-8ec8-95b62b889e89-kube-api-access-fhfzk\") pod \"service-ca-9c57cc56f-wzwk5\" (UID: \"18d7bda6-3eca-44ff-8ec8-95b62b889e89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.834440 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.839581 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s66n\" (UniqueName: \"kubernetes.io/projected/09c8c636-6cf7-44e5-b82c-e34e8385e895-kube-api-access-5s66n\") pod \"ingress-canary-m88v8\" (UID: \"09c8c636-6cf7-44e5-b82c-e34e8385e895\") " pod="openshift-ingress-canary/ingress-canary-m88v8" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.842525 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.853590 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.860750 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.873441 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.873572 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.373537242 +0000 UTC m=+144.289899598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.874095 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.874490 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.374482245 +0000 UTC m=+144.290844611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.880087 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m"] Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.882314 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-76xlm" Nov 25 19:36:01 crc kubenswrapper[4775]: W1125 19:36:01.886621 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e890c8a_b312_4d4a_9a86_98d9aa75a3c0.slice/crio-b59998e3a69313d803140f33a02812623ed2f0f808f0ce58a5fe3ff986be0870 WatchSource:0}: Error finding container b59998e3a69313d803140f33a02812623ed2f0f808f0ce58a5fe3ff986be0870: Status 404 returned error can't find the container with id b59998e3a69313d803140f33a02812623ed2f0f808f0ce58a5fe3ff986be0870 Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.897515 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.908979 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.915021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m88v8" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.922231 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ts8lq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.928413 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.949701 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" event={"ID":"6865cd6d-f340-4084-9efe-388f7744d93a","Type":"ContainerStarted","Data":"ca9a5dbc39c9a99509b889e607ed5118e026751b5f1389005fd5932a3cfaed82"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.950810 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" event={"ID":"c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6","Type":"ContainerStarted","Data":"97985091f2d5f9a5be5b9ccacf22fad34c2393f73f79b19a49542ac3492e1923"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.952028 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" event={"ID":"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f","Type":"ContainerStarted","Data":"20e3ea4a13545144b836368354c3ba2046a58e629440222f71479aa67fa9c978"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.952984 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pzkwp" 
event={"ID":"4c338a67-e4f2-49d1-a75b-1db89500dfd1","Type":"ContainerStarted","Data":"730fcdd5ce7405729e9bd9bd7b9982ae7a2da5532600c3dfefe2c1f3b16b9b0d"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.953877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" event={"ID":"98afa2eb-287e-4c9f-98d4-9b21849b04a4","Type":"ContainerStarted","Data":"ac2c0edd950cbc075d91934b3cdf4d37015696da6b730f773d794aa86dd8bc04"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.955511 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" event={"ID":"0a8caaab-1fce-46d6-8d6d-316903e159de","Type":"ContainerStarted","Data":"c248dbb89a8904b284f85879003041e0671bd23aac5682d3cbb0f32689941b47"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.955546 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" event={"ID":"0a8caaab-1fce-46d6-8d6d-316903e159de","Type":"ContainerStarted","Data":"a4ecfff4f6ca18c00ece1c76c7efdeacb551e7fd47d9f3d271e8b0f299125aaf"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.958933 4775 generic.go:334] "Generic (PLEG): container finished" podID="ec7a3c48-29be-4d48-b897-1b84a51e1583" containerID="4567611a12f3f1ba7e577cd17e8b46ab66b73386406cae40d92753bea6fb7e0d" exitCode=0 Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.958991 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" event={"ID":"ec7a3c48-29be-4d48-b897-1b84a51e1583","Type":"ContainerDied","Data":"4567611a12f3f1ba7e577cd17e8b46ab66b73386406cae40d92753bea6fb7e0d"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.959876 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" 
event={"ID":"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0","Type":"ContainerStarted","Data":"b59998e3a69313d803140f33a02812623ed2f0f808f0ce58a5fe3ff986be0870"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.960846 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" event={"ID":"990bfa85-5063-451b-a3c1-13a918a2069d","Type":"ContainerStarted","Data":"2c4b3659c0d58e567ad878720598a3d56b1cbc50c615b5d4eda189ecb748edc5"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.962116 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2c2hp" event={"ID":"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3","Type":"ContainerStarted","Data":"fcb56a6a22343724451749a88e8447fdc16f3a193694c2560d6936e989f82b0f"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.963475 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" event={"ID":"145a12a6-6592-4f46-9b71-4db14ccb3faa","Type":"ContainerStarted","Data":"f1a106a54e184ae174ff9ca8ff1e42b9e52c093df18a907f6194fbf35186d237"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.964472 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" event={"ID":"0d24c230-e34e-4509-bba0-86d680714e25","Type":"ContainerStarted","Data":"6c992a71789d1cd176b7bef9e890f516f0a1e33d4ca1a6c0067dc7279c7b72fb"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.965851 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vxrkp" event={"ID":"79cf629d-9f55-42f4-b5fa-58532bc6d191","Type":"ContainerStarted","Data":"8a0c222651237784148e0cc532f69ac0101dc0281a115bcaf4b46720fe1bb5c1"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.967692 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" event={"ID":"25947be7-09e9-475c-a477-90b964a3c16e","Type":"ContainerStarted","Data":"84d7112ac63cef3748f2143af7e44351a73905e2c70023215f5268a426a9b535"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.971762 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" event={"ID":"7ee4a869-0151-49bd-bde4-34be52d97b8d","Type":"ContainerStarted","Data":"5c9e014f0a14f8ea16e2d51d9da26d19b1221baf0481e9b20984e7fdb95b8d4b"} Nov 25 19:36:01 crc kubenswrapper[4775]: I1125 19:36:01.975443 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:01 crc kubenswrapper[4775]: E1125 19:36:01.975858 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.475835567 +0000 UTC m=+144.392197943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: W1125 19:36:02.033536 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57313bf3_1361_49f7_9a66_922b42ea36e7.slice/crio-cb5912416c5ef26ff7807cbb5c77556e170cd839c5fdc26b4347fe76049859b5 WatchSource:0}: Error finding container cb5912416c5ef26ff7807cbb5c77556e170cd839c5fdc26b4347fe76049859b5: Status 404 returned error can't find the container with id cb5912416c5ef26ff7807cbb5c77556e170cd839c5fdc26b4347fe76049859b5 Nov 25 19:36:02 crc kubenswrapper[4775]: W1125 19:36:02.043335 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf192fad_a167_4814_a144_d353f121e26a.slice/crio-a290596902b193eaf6766c5128f2c817fd48dc477dcf07d6f93c516a25a4d7e5 WatchSource:0}: Error finding container a290596902b193eaf6766c5128f2c817fd48dc477dcf07d6f93c516a25a4d7e5: Status 404 returned error can't find the container with id a290596902b193eaf6766c5128f2c817fd48dc477dcf07d6f93c516a25a4d7e5 Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.076850 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:02 crc 
kubenswrapper[4775]: E1125 19:36:02.077308 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.577296243 +0000 UTC m=+144.493658609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.177935 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.178349 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.678315403 +0000 UTC m=+144.594677769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.178769 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.179121 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.679108711 +0000 UTC m=+144.595471077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.280031 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.280536 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.780498244 +0000 UTC m=+144.696860610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.280912 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.281451 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.781436087 +0000 UTC m=+144.697798453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.296585 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk"] Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.382407 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.382576 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.88254452 +0000 UTC m=+144.798906886 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.383325 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.383850 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.883831946 +0000 UTC m=+144.800194312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.461962 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dhhr"] Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.484222 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.484821 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:02.984784494 +0000 UTC m=+144.901146860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.595078 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.595425 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.095407554 +0000 UTC m=+145.011769920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.697585 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.698565 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.198515979 +0000 UTC m=+145.114878345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: W1125 19:36:02.698933 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00c6f5ae_339a_4349_841d_cfdc229a16b6.slice/crio-bb36dda1a419e2b377d16e4a08a133b5b4fa8bd80a029e4f363439c4927d9c0d WatchSource:0}: Error finding container bb36dda1a419e2b377d16e4a08a133b5b4fa8bd80a029e4f363439c4927d9c0d: Status 404 returned error can't find the container with id bb36dda1a419e2b377d16e4a08a133b5b4fa8bd80a029e4f363439c4927d9c0d Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.717878 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.718383 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.218340621 +0000 UTC m=+145.134702987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.729129 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l"] Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.768458 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn"] Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.824843 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.825250 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.325229229 +0000 UTC m=+145.241591585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:02 crc kubenswrapper[4775]: I1125 19:36:02.926046 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:02 crc kubenswrapper[4775]: E1125 19:36:02.926900 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.426878241 +0000 UTC m=+145.343240607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.032360 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.032887 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.532855138 +0000 UTC m=+145.449217494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.034256 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.035327 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.535285073 +0000 UTC m=+145.451647439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.129837 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" event={"ID":"cf192fad-a167-4814-a144-d353f121e26a","Type":"ContainerStarted","Data":"a290596902b193eaf6766c5128f2c817fd48dc477dcf07d6f93c516a25a4d7e5"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.137145 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.137705 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.637685053 +0000 UTC m=+145.554047419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.140268 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" event={"ID":"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56","Type":"ContainerStarted","Data":"a6c5ae588ef450973b367caf643dca0c2f2f89bd615130ea6a48935b101d5a88"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.145676 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" event={"ID":"6865cd6d-f340-4084-9efe-388f7744d93a","Type":"ContainerStarted","Data":"6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.169864 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" event={"ID":"98afa2eb-287e-4c9f-98d4-9b21849b04a4","Type":"ContainerStarted","Data":"0c26d1a9fe71f37201fe5ad310efe8a8b3740e56f86f3e21119c0e51eb363660"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.171570 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" event={"ID":"12ce8247-daa8-42ce-90f4-b39317ca8583","Type":"ContainerStarted","Data":"d0cc1cb403cb76862593daee0f05ffe807314f12850ad808e1bb6105429a7359"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.171635 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" event={"ID":"12ce8247-daa8-42ce-90f4-b39317ca8583","Type":"ContainerStarted","Data":"c72538df812e6f3347d09052f1504a8da35f0dad4da3b04aa1b13a167ee647ab"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.174379 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" event={"ID":"e865c9de-8fd2-4b09-854c-0426a35d3290","Type":"ContainerStarted","Data":"6fa6fd02aa3ee05a289a884d010fd0a266d5a573a566a85f0fced8afb16e2b7c"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.229345 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn"] Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.239150 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.241314 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.741292665 +0000 UTC m=+145.657655031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.253088 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-76xlm"] Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.253221 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zzjj4"] Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.288580 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" event={"ID":"fa227fc1-d306-45c7-908a-b1e39bd2971d","Type":"ContainerStarted","Data":"a8aa30cdce51827e612c0eaf40acdcdafe41de0cc400879b4a737c56437f64e7"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.311587 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" event={"ID":"145a12a6-6592-4f46-9b71-4db14ccb3faa","Type":"ContainerStarted","Data":"324e0c6cbb685506d7ae682a1a381d75d39b27184716c5a6df3b536328241db5"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.332115 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2c2hp" event={"ID":"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3","Type":"ContainerStarted","Data":"cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.342482 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.344796 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.844778681 +0000 UTC m=+145.761141047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.352016 4775 generic.go:334] "Generic (PLEG): container finished" podID="7ee4a869-0151-49bd-bde4-34be52d97b8d" containerID="5c9e014f0a14f8ea16e2d51d9da26d19b1221baf0481e9b20984e7fdb95b8d4b" exitCode=0 Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.352058 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" event={"ID":"7ee4a869-0151-49bd-bde4-34be52d97b8d","Type":"ContainerDied","Data":"5c9e014f0a14f8ea16e2d51d9da26d19b1221baf0481e9b20984e7fdb95b8d4b"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.358351 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5d9l" podStartSLOduration=124.358320241 podStartE2EDuration="2m4.358320241s" 
podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.328882288 +0000 UTC m=+145.245244664" watchObservedRunningTime="2025-11-25 19:36:03.358320241 +0000 UTC m=+145.274682607" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.389843 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" event={"ID":"c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6","Type":"ContainerStarted","Data":"e7ef7518f93d472592a7c0979bf5d293ffca7c2ad95de36cad311a36068ad12e"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.392158 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vxrkp" event={"ID":"79cf629d-9f55-42f4-b5fa-58532bc6d191","Type":"ContainerStarted","Data":"6709a084459cbc65e5ecc23a3d7f3ec6e51e421ea58f6566529f8beed33c3aba"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.393976 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ts8lq" event={"ID":"00c6f5ae-339a-4349-841d-cfdc229a16b6","Type":"ContainerStarted","Data":"bb36dda1a419e2b377d16e4a08a133b5b4fa8bd80a029e4f363439c4927d9c0d"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.402347 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" event={"ID":"57313bf3-1361-49f7-9a66-922b42ea36e7","Type":"ContainerStarted","Data":"6bbc5bc5145a9cd94f70c838b001b581d312b46590b5e3bfce46159ee2c00eab"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.402394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" 
event={"ID":"57313bf3-1361-49f7-9a66-922b42ea36e7","Type":"ContainerStarted","Data":"cb5912416c5ef26ff7807cbb5c77556e170cd839c5fdc26b4347fe76049859b5"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.445028 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.445714 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:03.945695379 +0000 UTC m=+145.862057745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.448324 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" event={"ID":"990bfa85-5063-451b-a3c1-13a918a2069d","Type":"ContainerStarted","Data":"0f419ac5b2c933c94ceddc3766aac3351f0fa4873fd87760907d5f84ad03943c"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.456071 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pzkwp" 
event={"ID":"4c338a67-e4f2-49d1-a75b-1db89500dfd1","Type":"ContainerStarted","Data":"073be501cda4dd4316fb242f84c280fb575cf4509cf37cedc5bc63639235ee3b"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.456441 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.466145 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" event={"ID":"c4c88511-de83-4eb3-8e7e-b97271361717","Type":"ContainerStarted","Data":"ef5abf39f3cd06eea2aaf5d778d236fd2a76d24212074b746e9509ff024bcb0e"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.468695 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vxrkp" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.473152 4775 patch_prober.go:28] interesting pod/console-operator-58897d9998-pzkwp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.473268 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pzkwp" podUID="4c338a67-e4f2-49d1-a75b-1db89500dfd1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.501121 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 25 19:36:03 crc kubenswrapper[4775]: 
I1125 19:36:03.501206 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.519068 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" event={"ID":"ac223290-b447-4d88-ba79-bc30253d3c27","Type":"ContainerStarted","Data":"58f8882b798119bacda46061d615a9675486ce0306872626b679ee079c3ef2a8"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.550008 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.558671 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.051907252 +0000 UTC m=+145.968269618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.560366 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" event={"ID":"0d24c230-e34e-4509-bba0-86d680714e25","Type":"ContainerStarted","Data":"e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.561431 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.580966 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pdqsx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.581044 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" podUID="0d24c230-e34e-4509-bba0-86d680714e25" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.584426 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dctvs" podStartSLOduration=124.584409624 
podStartE2EDuration="2m4.584409624s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.581361866 +0000 UTC m=+145.497724232" watchObservedRunningTime="2025-11-25 19:36:03.584409624 +0000 UTC m=+145.500771990" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.616226 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" event={"ID":"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f","Type":"ContainerStarted","Data":"aba29463d649bc4a3b296070c5c19e669e6d4d2cbbcc1e7d0d2a520c3d60e203"} Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.655121 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.660311 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.16018316 +0000 UTC m=+146.076545526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.681441 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gptl7" podStartSLOduration=124.681418822 podStartE2EDuration="2m4.681418822s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.679173582 +0000 UTC m=+145.595535948" watchObservedRunningTime="2025-11-25 19:36:03.681418822 +0000 UTC m=+145.597781188" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.681621 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" podStartSLOduration=123.681617959 podStartE2EDuration="2m3.681617959s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.644483623 +0000 UTC m=+145.560845999" watchObservedRunningTime="2025-11-25 19:36:03.681617959 +0000 UTC m=+145.597980325" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.720094 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wzwk5"] Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.729709 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds"] Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.740039 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g582m" podStartSLOduration=123.740010059 podStartE2EDuration="2m3.740010059s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.736840946 +0000 UTC m=+145.653203312" watchObservedRunningTime="2025-11-25 19:36:03.740010059 +0000 UTC m=+145.656372425" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.756419 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.756759 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.256741872 +0000 UTC m=+146.173104238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.787286 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-pzkwp" podStartSLOduration=124.787252822 podStartE2EDuration="2m4.787252822s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.786522497 +0000 UTC m=+145.702884873" watchObservedRunningTime="2025-11-25 19:36:03.787252822 +0000 UTC m=+145.703615178" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.833724 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m88v8"] Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.843312 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-vxrkp" podStartSLOduration=124.843288118 podStartE2EDuration="2m4.843288118s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.841738633 +0000 UTC m=+145.758100999" watchObservedRunningTime="2025-11-25 19:36:03.843288118 +0000 UTC m=+145.759650474" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.876438 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-q5ml6" 
podStartSLOduration=123.876409992 podStartE2EDuration="2m3.876409992s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.875077425 +0000 UTC m=+145.791439801" watchObservedRunningTime="2025-11-25 19:36:03.876409992 +0000 UTC m=+145.792772368" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.888212 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:03 crc kubenswrapper[4775]: W1125 19:36:03.889053 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e8cafb9_f419_4958_95f0_3e9ffd9031a5.slice/crio-329fe5d2237e943a0051000b3a0d4e76c32b3bcacb960261e34a0850006457dd WatchSource:0}: Error finding container 329fe5d2237e943a0051000b3a0d4e76c32b3bcacb960261e34a0850006457dd: Status 404 returned error can't find the container with id 329fe5d2237e943a0051000b3a0d4e76c32b3bcacb960261e34a0850006457dd Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.889164 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.389148264 +0000 UTC m=+146.305510630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.964045 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8ldr6" podStartSLOduration=124.964011497 podStartE2EDuration="2m4.964011497s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.91075386 +0000 UTC m=+145.827116246" watchObservedRunningTime="2025-11-25 19:36:03.964011497 +0000 UTC m=+145.880373863" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.975685 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2c2hp" podStartSLOduration=124.975638649 podStartE2EDuration="2m4.975638649s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:03.94407885 +0000 UTC m=+145.860441216" watchObservedRunningTime="2025-11-25 19:36:03.975638649 +0000 UTC m=+145.892001015" Nov 25 19:36:03 crc kubenswrapper[4775]: I1125 19:36:03.992853 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:03 crc kubenswrapper[4775]: E1125 19:36:03.994642 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.494602482 +0000 UTC m=+146.410964848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:03.997889 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 19:36:04.005954 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.505904612 +0000 UTC m=+146.422266998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.025791 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2n2wg" podStartSLOduration=125.025769846 podStartE2EDuration="2m5.025769846s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:04.024733679 +0000 UTC m=+145.941096045" watchObservedRunningTime="2025-11-25 19:36:04.025769846 +0000 UTC m=+145.942132212" Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.099846 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 19:36:04.100221 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.600207424 +0000 UTC m=+146.516569790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.099974 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" podStartSLOduration=125.099954685 podStartE2EDuration="2m5.099954685s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:04.099573132 +0000 UTC m=+146.015935498" watchObservedRunningTime="2025-11-25 19:36:04.099954685 +0000 UTC m=+146.016317051" Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.100413 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 19:36:04.100783 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.600767283 +0000 UTC m=+146.517129649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.171288 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" podStartSLOduration=125.171264942 podStartE2EDuration="2m5.171264942s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:04.134464247 +0000 UTC m=+146.050826613" watchObservedRunningTime="2025-11-25 19:36:04.171264942 +0000 UTC m=+146.087627308" Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.175217 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7"] Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.204259 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 19:36:04.204624 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 19:36:04.704604904 +0000 UTC m=+146.620967270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.228900 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7h68s"] Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.259581 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx"] Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.270708 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-spkbv"] Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.296671 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mzblf"] Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.296759 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr"] Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.305784 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:04 
crc kubenswrapper[4775]: E1125 19:36:04.306147 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.806130571 +0000 UTC m=+146.722492927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.316806 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb"] Nov 25 19:36:04 crc kubenswrapper[4775]: W1125 19:36:04.342917 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a318a01_2098_4719_839b_d3dee730659e.slice/crio-c465c6d27c2bce47800c5879095224be0cf3a99d0722e6cede70c4e8eeced9f9 WatchSource:0}: Error finding container c465c6d27c2bce47800c5879095224be0cf3a99d0722e6cede70c4e8eeced9f9: Status 404 returned error can't find the container with id c465c6d27c2bce47800c5879095224be0cf3a99d0722e6cede70c4e8eeced9f9 Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.416724 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 
19:36:04.417474 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:04.917458247 +0000 UTC m=+146.833820613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.431379 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw"] Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.453753 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sl6bq"] Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.453822 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5h4vj"] Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.487125 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 19:36:04 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Nov 25 19:36:04 crc kubenswrapper[4775]: [+]process-running ok Nov 25 19:36:04 crc kubenswrapper[4775]: healthz check failed Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.487192 4775 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.519357 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 19:36:04.519811 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.019794124 +0000 UTC m=+146.936156490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.623628 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 19:36:04.624431 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.124412551 +0000 UTC m=+147.040774917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.693112 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ts8lq" event={"ID":"00c6f5ae-339a-4349-841d-cfdc229a16b6","Type":"ContainerStarted","Data":"0551099186de34a1c74499a35dfadee6a009efdc0b7ced4d7ff5b2a807c77efd"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.715802 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" event={"ID":"6a318a01-2098-4719-839b-d3dee730659e","Type":"ContainerStarted","Data":"c465c6d27c2bce47800c5879095224be0cf3a99d0722e6cede70c4e8eeced9f9"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.728566 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ts8lq" podStartSLOduration=6.7285397719999995 podStartE2EDuration="6.728539772s" podCreationTimestamp="2025-11-25 19:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:04.726387255 +0000 UTC m=+146.642749631" watchObservedRunningTime="2025-11-25 19:36:04.728539772 +0000 UTC m=+146.644902138" Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.728715 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 19:36:04.729575 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.229556017 +0000 UTC m=+147.145918383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.759835 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" event={"ID":"12ce8247-daa8-42ce-90f4-b39317ca8583","Type":"ContainerStarted","Data":"d520542031062aa04d41fb579eb19ded26346315f5a8b34d42abdb918599e21c"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.772599 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" event={"ID":"c4c88511-de83-4eb3-8e7e-b97271361717","Type":"ContainerStarted","Data":"5d7c4a1b8f3103e25cf515f27b0218ca6b173d7f438f180f4f986e0145118344"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.772952 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 
19:36:04.778847 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlhkz" event={"ID":"061c9ad9-8a37-4efb-b1eb-bdf8fa5d164f","Type":"ContainerStarted","Data":"1bad72e15c254a81b58b84ee22e235486905fdbad9b92f218bb107e16b3b9c22"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.780316 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" event={"ID":"3566ef9c-3d80-480e-b069-1ff60753877f","Type":"ContainerStarted","Data":"002250c77db16f0bb771231857cc900025b35196efc61293539be45edcb6144a"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.780946 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" event={"ID":"cee72ebe-1d37-4620-ab61-9f1a90a346c2","Type":"ContainerStarted","Data":"71317d4cd3e1a6eca0f4c0f4124225e2aba2154dbac850aef87b7a8b2ab88ae6"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.795940 4775 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8dhhr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.796023 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" podUID="c4c88511-de83-4eb3-8e7e-b97271361717" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.816129 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" 
event={"ID":"6d8d4329-a2ce-4a85-a1ed-059baf355aa7","Type":"ContainerStarted","Data":"530a0bb7981f740dbed3f0637176cafdc226637568cd0b11238f052680f82e77"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.816211 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" event={"ID":"6d8d4329-a2ce-4a85-a1ed-059baf355aa7","Type":"ContainerStarted","Data":"325f2d5989939fba0dc654d29cbb0f1e44ec332b045b22d4eed5463d184b769c"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.818598 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx" event={"ID":"66879c07-13ef-4be1-b27b-d5d68d4d5b67","Type":"ContainerStarted","Data":"579db354d90a9ce1fe9efb9548cf6e4cb7a0903267841c98c6c66784303d5708"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.826202 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" event={"ID":"243f4849-ac50-4876-a24e-bbc936a16cf4","Type":"ContainerStarted","Data":"b779f4cd0d06ef858dd9ef01d109800997787e40783c8a2974076e7f387e9589"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.829307 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 19:36:04.830969 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.330949031 +0000 UTC m=+147.247311397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.842563 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ws2gh" podStartSLOduration=125.842540862 podStartE2EDuration="2m5.842540862s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:04.840092965 +0000 UTC m=+146.756455331" watchObservedRunningTime="2025-11-25 19:36:04.842540862 +0000 UTC m=+146.758903228" Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.913005 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" podStartSLOduration=125.912971958 podStartE2EDuration="2m5.912971958s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:04.9042859 +0000 UTC m=+146.820648276" watchObservedRunningTime="2025-11-25 19:36:04.912971958 +0000 UTC m=+146.829334334" Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.932735 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: 
\"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:04 crc kubenswrapper[4775]: E1125 19:36:04.933961 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.433945481 +0000 UTC m=+147.350307847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.964139 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7h68s" event={"ID":"7959f454-8db6-4c44-9d44-9b3b2862935f","Type":"ContainerStarted","Data":"3dab43e629ba15ea8b25802c99bed32879497fe0c8c2a76fd081f57c9c63933e"} Nov 25 19:36:04 crc kubenswrapper[4775]: I1125 19:36:04.981186 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" event={"ID":"fa227fc1-d306-45c7-908a-b1e39bd2971d","Type":"ContainerStarted","Data":"cd2932d195da7d2e07f38d9cd44f367c7af86b303713194d9eacd9fb3847aca5"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.034327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") 
" Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.037636 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.537613715 +0000 UTC m=+147.453976081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.049512 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m88v8" event={"ID":"09c8c636-6cf7-44e5-b82c-e34e8385e895","Type":"ContainerStarted","Data":"e422a2d02447dc8cd352fd5f7e18f6c7c64371e1565cf66e504497921eb63ae9"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.049578 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m88v8" event={"ID":"09c8c636-6cf7-44e5-b82c-e34e8385e895","Type":"ContainerStarted","Data":"2e2b87ee62b51a76cce4e3e6cc7eee76f895e7c7da92e11fffaae3700eb00ee2"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.102237 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" event={"ID":"03363846-dbbf-41cd-9ecc-dd8dd93906c3","Type":"ContainerStarted","Data":"69363a5d45c6b366bc47a594d427b76d5d786fab59d39d755363ce90edc31f65"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.103607 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.111566 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlnhk" podStartSLOduration=126.111548535 podStartE2EDuration="2m6.111548535s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.024179859 +0000 UTC m=+146.940542225" watchObservedRunningTime="2025-11-25 19:36:05.111548535 +0000 UTC m=+147.027910901" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.113776 4775 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-f7vp7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.113834 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" podUID="03363846-dbbf-41cd-9ecc-dd8dd93906c3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.141113 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.142039 4775 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.642019405 +0000 UTC m=+147.558381771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.155061 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" event={"ID":"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56","Type":"ContainerStarted","Data":"0636345eb71f2759fed8b44d14bbccb2eb7f30091ba23ab17bb7c63cd83668be"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.163944 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m88v8" podStartSLOduration=7.163924762 podStartE2EDuration="7.163924762s" podCreationTimestamp="2025-11-25 19:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.110355033 +0000 UTC m=+147.026717399" watchObservedRunningTime="2025-11-25 19:36:05.163924762 +0000 UTC m=+147.080287128" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.165112 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" podStartSLOduration=125.165107214 podStartE2EDuration="2m5.165107214s" podCreationTimestamp="2025-11-25 19:34:00 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.162391198 +0000 UTC m=+147.078753564" watchObservedRunningTime="2025-11-25 19:36:05.165107214 +0000 UTC m=+147.081469580" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.244400 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" event={"ID":"c181c9ca-b08f-41ed-b0bb-3fe1ef3f6ad6","Type":"ContainerStarted","Data":"3ea9e8168df209b52a86ec2f40c8cf05a6c76eb705c34d067e1f445d02a17d3f"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.246196 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.265363 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" event={"ID":"54dca2d8-8976-4d24-b97a-a9e867d0d74b","Type":"ContainerStarted","Data":"ea4952d722b0b1d2af214af868891d51f758da6e24774304d327816049486c0f"} Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.250065 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.746464647 +0000 UTC m=+147.662827013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.265599 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.266355 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.766337291 +0000 UTC m=+147.682699657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.296549 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xnxgj" podStartSLOduration=126.296525801 podStartE2EDuration="2m6.296525801s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.29508852 +0000 UTC m=+147.211450886" watchObservedRunningTime="2025-11-25 19:36:05.296525801 +0000 UTC m=+147.212888167" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.368536 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.369716 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.869692624 +0000 UTC m=+147.786054990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.384032 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" event={"ID":"ac223290-b447-4d88-ba79-bc30253d3c27","Type":"ContainerStarted","Data":"7a1925cb6e18bfa1b52bf754969ce5acc587fa6b7a528d58995288bbbb87f040"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.448537 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8sh7l" podStartSLOduration=126.448501947 podStartE2EDuration="2m6.448501947s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.444814856 +0000 UTC m=+147.361177222" watchObservedRunningTime="2025-11-25 19:36:05.448501947 +0000 UTC m=+147.364864313" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.455778 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-76xlm" event={"ID":"fd6511bf-ce8c-40d6-913e-b28add158dee","Type":"ContainerStarted","Data":"0f1f9f2347a4e3f248f88f95b88ed72d76e20b80dc040ea70d52ad659e9562f5"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.471881 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.474178 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:05.974163366 +0000 UTC m=+147.890525732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.483435 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 19:36:05 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Nov 25 19:36:05 crc kubenswrapper[4775]: [+]process-running ok Nov 25 19:36:05 crc kubenswrapper[4775]: healthz check failed Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.483499 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.511712 4775 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" event={"ID":"e7641661-a2a3-4eca-b5fd-892e7f60bcf4","Type":"ContainerStarted","Data":"55afad5bd2959e76ed7ae842561a3df3ea9723f27e4eb453e1ed523fd7fb62fd"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.541074 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" event={"ID":"18d7bda6-3eca-44ff-8ec8-95b62b889e89","Type":"ContainerStarted","Data":"a2eefe49fa0c9108309aab9edbd04dd77e095be38278ef3001b36a5dd4751edc"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.541457 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" event={"ID":"18d7bda6-3eca-44ff-8ec8-95b62b889e89","Type":"ContainerStarted","Data":"1b223bb6a2cf29c3a597f35d9d76166c5178c3de332d5a92bafa74c30857337b"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.575469 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.578416 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.07838793 +0000 UTC m=+147.994750296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.591083 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" event={"ID":"8e890c8a-b312-4d4a-9a86-98d9aa75a3c0","Type":"ContainerStarted","Data":"5e93879bc0f114d306320afbad2405849375c1bc30b47fd30f2f69e67c417e9c"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.594244 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wzwk5" podStartSLOduration=125.594231622 podStartE2EDuration="2m5.594231622s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.592214861 +0000 UTC m=+147.508577227" watchObservedRunningTime="2025-11-25 19:36:05.594231622 +0000 UTC m=+147.510593988" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.620277 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" event={"ID":"e5e32e34-55d7-4513-a4d4-192be425e29f","Type":"ContainerStarted","Data":"be2279175af24b5ec9c1e276fa280c9f76f1112385db4b9be7f23f26f3ec9e26"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.620333 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" 
event={"ID":"e5e32e34-55d7-4513-a4d4-192be425e29f","Type":"ContainerStarted","Data":"3c99f763098f8e118f479a9259811d8eb8df43957c21d206039eb5cb9cda5088"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.638425 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qp4l6" podStartSLOduration=126.638405438 podStartE2EDuration="2m6.638405438s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.637044919 +0000 UTC m=+147.553407285" watchObservedRunningTime="2025-11-25 19:36:05.638405438 +0000 UTC m=+147.554767804" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.638963 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" event={"ID":"7e8cafb9-f419-4958-95f0-3e9ffd9031a5","Type":"ContainerStarted","Data":"329fe5d2237e943a0051000b3a0d4e76c32b3bcacb960261e34a0850006457dd"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.639011 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.656853 4775 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-m8wds container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.656925 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" podUID="7e8cafb9-f419-4958-95f0-3e9ffd9031a5" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.670096 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" event={"ID":"cf192fad-a167-4814-a144-d353f121e26a","Type":"ContainerStarted","Data":"f056d0ea73d5fa9de107fd0a7614a8d8f9052a7d6ef3b97f435214db1321dd97"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.670991 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.681703 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.682801 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.182780029 +0000 UTC m=+148.099142395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.696005 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" event={"ID":"74e16d71-813f-439f-a733-ce5d9ab3318c","Type":"ContainerStarted","Data":"74221e744f54e0a753bd40055f3b8517f95858373f80ad7b22eaef7d6a0d012b"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.708948 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.719317 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" event={"ID":"7ee4a869-0151-49bd-bde4-34be52d97b8d","Type":"ContainerStarted","Data":"7ca89451afa2250e6aa6a7bd2913dc063a690c700fb07ef6b304a29fe9a5f47f"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.720049 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.724565 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" podStartSLOduration=125.72454805 podStartE2EDuration="2m5.72454805s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-25 19:36:05.722227037 +0000 UTC m=+147.638589403" watchObservedRunningTime="2025-11-25 19:36:05.72454805 +0000 UTC m=+147.640910416" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.736218 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sl6bq" event={"ID":"63343f9f-b1cf-43a2-9879-34ba51820dae","Type":"ContainerStarted","Data":"5b94455c867da5f36dcd69ccbd64acaab1049a879343ec4969786c44ff31699b"} Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.737443 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.754863 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.764345 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.780200 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-pzkwp" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.782847 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.790491 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fsh6m" podStartSLOduration=125.790469086 podStartE2EDuration="2m5.790469086s" 
podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.790367322 +0000 UTC m=+147.706729688" watchObservedRunningTime="2025-11-25 19:36:05.790469086 +0000 UTC m=+147.706831452" Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.785288 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.285255011 +0000 UTC m=+148.201617377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.794857 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.796799 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.29677936 +0000 UTC m=+148.213141726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.867207 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" podStartSLOduration=126.867182635 podStartE2EDuration="2m6.867182635s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.866144968 +0000 UTC m=+147.782507334" watchObservedRunningTime="2025-11-25 19:36:05.867182635 +0000 UTC m=+147.783544991" Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.919958 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:05 crc kubenswrapper[4775]: E1125 19:36:05.920786 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.420766033 +0000 UTC m=+148.337128399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:05 crc kubenswrapper[4775]: I1125 19:36:05.993354 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" podStartSLOduration=125.993326615 podStartE2EDuration="2m5.993326615s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:05.921841562 +0000 UTC m=+147.838203928" watchObservedRunningTime="2025-11-25 19:36:05.993326615 +0000 UTC m=+147.909688981" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.031102 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.031488 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.531475307 +0000 UTC m=+148.447837673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.075685 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" podStartSLOduration=126.075631272 podStartE2EDuration="2m6.075631272s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:06.004352276 +0000 UTC m=+147.920714642" watchObservedRunningTime="2025-11-25 19:36:06.075631272 +0000 UTC m=+147.991993638" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.132758 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.133103 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.633068078 +0000 UTC m=+148.549430444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.133330 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.133760 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.633753242 +0000 UTC m=+148.550115608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.235234 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.235691 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.735668214 +0000 UTC m=+148.652030580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.337071 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.337674 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.837629687 +0000 UTC m=+148.753992253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.440260 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.440792 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:06.940776473 +0000 UTC m=+148.857138829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.481202 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 19:36:06 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Nov 25 19:36:06 crc kubenswrapper[4775]: [+]process-running ok Nov 25 19:36:06 crc kubenswrapper[4775]: healthz check failed Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.481578 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.542614 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.543056 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 19:36:07.043043087 +0000 UTC m=+148.959405453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.645865 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.646095 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:07.146062978 +0000 UTC m=+149.062425334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.646323 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.646674 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:07.146640819 +0000 UTC m=+149.063003185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.747249 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" event={"ID":"d8d8d255-df7c-41c3-a1ac-e8ce91afcc56","Type":"ContainerStarted","Data":"ed0c872851fb58fc0777d3c7ac5d7400727b0f18e7cbd8a41bb09f74c69c1f96"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.747423 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.747597 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.747624 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.747697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.747753 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.751135 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:07.25110476 +0000 UTC m=+149.167467126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.751776 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.757747 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.758887 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.762559 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.777547 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" event={"ID":"ec7a3c48-29be-4d48-b897-1b84a51e1583","Type":"ContainerStarted","Data":"2cbe2892278b3b6724db4078fa326b30ad3508572a80000f6c53e79b1adfc991"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.789057 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kncmq" podStartSLOduration=127.789037504 podStartE2EDuration="2m7.789037504s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:06.788240587 +0000 UTC m=+148.704602963" watchObservedRunningTime="2025-11-25 19:36:06.789037504 +0000 UTC m=+148.705399890" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.791697 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-spkbv" event={"ID":"74e16d71-813f-439f-a733-ce5d9ab3318c","Type":"ContainerStarted","Data":"38db74461877aeb2261306933c0e197e5b2617a99e4055bf9f8ed5103ef57166"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.816090 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sl6bq" event={"ID":"63343f9f-b1cf-43a2-9879-34ba51820dae","Type":"ContainerStarted","Data":"ab4fae7c670c0e498364bab5da80fae663069c1069b12acd1108e28addb1e562"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.832845 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7h68s" 
event={"ID":"7959f454-8db6-4c44-9d44-9b3b2862935f","Type":"ContainerStarted","Data":"b7d58246a2da6debdcce6936944b860609d4e7c13d21d5e332732e26ea633676"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.833900 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7h68s" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.836074 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" event={"ID":"03363846-dbbf-41cd-9ecc-dd8dd93906c3","Type":"ContainerStarted","Data":"3cb2751e084992fbb93af7e2845909319f6afdb9915821fb912b21caf6d760d9"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.836781 4775 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-f7vp7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.836826 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" podUID="03363846-dbbf-41cd-9ecc-dd8dd93906c3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.842516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" event={"ID":"cee72ebe-1d37-4620-ab61-9f1a90a346c2","Type":"ContainerStarted","Data":"94cca129dba637a1c3eb7ee09fd9db0380a880972852e3f3f0583a6abe86230c"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.843469 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-7h68s container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.843535 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7h68s" podUID="7959f454-8db6-4c44-9d44-9b3b2862935f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.851147 4775 generic.go:334] "Generic (PLEG): container finished" podID="e7641661-a2a3-4eca-b5fd-892e7f60bcf4" containerID="7cf3c533918c7058b81857aa63829bdd936951b64c72563b8a70bf009dd5fc71" exitCode=0 Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.852455 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.852818 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2" podStartSLOduration=126.852794554 podStartE2EDuration="2m6.852794554s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:06.851595192 +0000 UTC m=+148.767957568" watchObservedRunningTime="2025-11-25 19:36:06.852794554 +0000 UTC m=+148.769156920" Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.855184 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:07.355162089 +0000 UTC m=+149.271524455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.871945 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.885991 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.895901 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfldr" podStartSLOduration=126.895873521 podStartE2EDuration="2m6.895873521s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:06.893070032 +0000 UTC m=+148.809432398" watchObservedRunningTime="2025-11-25 19:36:06.895873521 +0000 UTC m=+148.812235887" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.897829 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.970533 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7h68s" podStartSLOduration=127.970511236 podStartE2EDuration="2m7.970511236s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:06.9703297 +0000 UTC m=+148.886692066" watchObservedRunningTime="2025-11-25 19:36:06.970511236 +0000 UTC m=+148.886873602" Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.970815 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:06 crc kubenswrapper[4775]: E1125 19:36:06.972313 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:07.472293599 +0000 UTC m=+149.388656035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.987239 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" event={"ID":"e7641661-a2a3-4eca-b5fd-892e7f60bcf4","Type":"ContainerStarted","Data":"43e3f646bf74be7a856983d9c79f72816962e19345688e5c08c80d0aae6fb96d"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.987320 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" event={"ID":"e7641661-a2a3-4eca-b5fd-892e7f60bcf4","Type":"ContainerDied","Data":"7cf3c533918c7058b81857aa63829bdd936951b64c72563b8a70bf009dd5fc71"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.987339 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" event={"ID":"243f4849-ac50-4876-a24e-bbc936a16cf4","Type":"ContainerStarted","Data":"2316a2483d56099be88db8b119139582053c38f448a7c004adcb4eec7fef9757"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.987384 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" event={"ID":"243f4849-ac50-4876-a24e-bbc936a16cf4","Type":"ContainerStarted","Data":"251b124139ab915c9c676eca55ee6b4bf6ca4b91755dacebeb53a45f87741cff"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.987410 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h9fsn" 
event={"ID":"e5e32e34-55d7-4513-a4d4-192be425e29f","Type":"ContainerStarted","Data":"e9d5a77fe7135d13d14a67171560093f6ebb49fdf84999d1dab51438a57942ba"} Nov 25 19:36:06 crc kubenswrapper[4775]: I1125 19:36:06.998363 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-76xlm" event={"ID":"fd6511bf-ce8c-40d6-913e-b28add158dee","Type":"ContainerStarted","Data":"f2ce7faade91b052eaf43734dd9d6246bbccb74ad2cdea7ba55f979631fe7f7c"} Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.002914 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" event={"ID":"54dca2d8-8976-4d24-b97a-a9e867d0d74b","Type":"ContainerStarted","Data":"9a91074107a17fcce56a70d39828d5158e5bd69aad20fac31d3f42d29a5adfed"} Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.012888 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx" event={"ID":"66879c07-13ef-4be1-b27b-d5d68d4d5b67","Type":"ContainerStarted","Data":"0a7bec469934f2a03e687b7f82acb596eeca8f4fa1d8c7c54300d2bbb14c694b"} Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.012935 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx" event={"ID":"66879c07-13ef-4be1-b27b-d5d68d4d5b67","Type":"ContainerStarted","Data":"feb64311057e35687556777907787b21a1139f7843115691fe231364568a27d2"} Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.017627 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mzblf" podStartSLOduration=127.017612656 podStartE2EDuration="2m7.017612656s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:07.016780896 +0000 UTC 
m=+148.933143262" watchObservedRunningTime="2025-11-25 19:36:07.017612656 +0000 UTC m=+148.933975022" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.061246 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" event={"ID":"6a318a01-2098-4719-839b-d3dee730659e","Type":"ContainerStarted","Data":"2d8211646fd4495bd139cae65cf173eb4c15eebda165ee9937e30e0a45ccf5cd"} Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.061312 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" event={"ID":"6a318a01-2098-4719-839b-d3dee730659e","Type":"ContainerStarted","Data":"5ba00d8e6461fb87a965381a2c9d1665b0b4b682a9b9da9bf1980cf5535880d0"} Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.062144 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.073132 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:07.573116632 +0000 UTC m=+149.489478988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.074187 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.080430 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pbchx" podStartSLOduration=127.08040843 podStartE2EDuration="2m7.08040843s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:07.078332187 +0000 UTC m=+148.994694553" watchObservedRunningTime="2025-11-25 19:36:07.08040843 +0000 UTC m=+148.996770796" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.126784 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" event={"ID":"6d8d4329-a2ce-4a85-a1ed-059baf355aa7","Type":"ContainerStarted","Data":"25ea70f11cc461f24dbc1f7017c70a360bb0a1b5fe8e71fe6e680fdcd99e3a0a"} Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.130071 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" podStartSLOduration=127.13005778 podStartE2EDuration="2m7.13005778s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:07.12807805 +0000 UTC m=+149.044440416" watchObservedRunningTime="2025-11-25 19:36:07.13005778 +0000 UTC m=+149.046420146" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.139440 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" event={"ID":"3566ef9c-3d80-480e-b069-1ff60753877f","Type":"ContainerStarted","Data":"faf8caab22e1737baddc8abc010b031989665f95cfcfad0880cff713cf4399c1"} Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.140660 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.150852 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5h4vj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.150926 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" podUID="3566ef9c-3d80-480e-b069-1ff60753877f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.166002 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" 
event={"ID":"7e8cafb9-f419-4958-95f0-3e9ffd9031a5","Type":"ContainerStarted","Data":"38f900dd3ae0e37efa9b9dbba6dcf95ac5bf5044e10a8660b515bc4e9e1d9e38"} Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.178188 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.178391 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.178710 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:07.678689784 +0000 UTC m=+149.595052150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.190582 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l975q" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.226614 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mh7qn" podStartSLOduration=127.226581241 podStartE2EDuration="2m7.226581241s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:07.165189465 +0000 UTC m=+149.081551831" watchObservedRunningTime="2025-11-25 19:36:07.226581241 +0000 UTC m=+149.142943607" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.257765 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb" podStartSLOduration=127.257731655 podStartE2EDuration="2m7.257731655s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:07.214771192 +0000 UTC m=+149.131133558" watchObservedRunningTime="2025-11-25 19:36:07.257731655 +0000 UTC m=+149.174094021" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.310604 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.325252 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:07.825222527 +0000 UTC m=+149.741584953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.412667 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.412956 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:07.912935185 +0000 UTC m=+149.829297551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.506517 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 19:36:07 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Nov 25 19:36:07 crc kubenswrapper[4775]: [+]process-running ok Nov 25 19:36:07 crc kubenswrapper[4775]: healthz check failed Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.507004 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.513957 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.514447 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 19:36:08.014429692 +0000 UTC m=+149.930792058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.618153 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.618528 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.118507501 +0000 UTC m=+150.034869867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.722490 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.722951 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.222937171 +0000 UTC m=+150.139299537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.805539 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" podStartSLOduration=127.805518958 podStartE2EDuration="2m7.805518958s" podCreationTimestamp="2025-11-25 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:07.352062978 +0000 UTC m=+149.268425344" watchObservedRunningTime="2025-11-25 19:36:07.805518958 +0000 UTC m=+149.721881324" Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.825714 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.825950 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.325912811 +0000 UTC m=+150.242275177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.826057 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.826554 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.326535333 +0000 UTC m=+150.242897699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:07 crc kubenswrapper[4775]: W1125 19:36:07.850972 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-be0852e198814390662f6c5011d0df516933c1e49d4c09ab2caebce2f936870f WatchSource:0}: Error finding container be0852e198814390662f6c5011d0df516933c1e49d4c09ab2caebce2f936870f: Status 404 returned error can't find the container with id be0852e198814390662f6c5011d0df516933c1e49d4c09ab2caebce2f936870f Nov 25 19:36:07 crc kubenswrapper[4775]: I1125 19:36:07.928484 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:07 crc kubenswrapper[4775]: E1125 19:36:07.928811 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.428791007 +0000 UTC m=+150.345153373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.033405 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.033853 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.533837809 +0000 UTC m=+150.450200185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.136932 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.137834 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.637810315 +0000 UTC m=+150.554172681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.166870 4775 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-m8wds container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.166942 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" podUID="7e8cafb9-f419-4958-95f0-3e9ffd9031a5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.211469 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" event={"ID":"e7641661-a2a3-4eca-b5fd-892e7f60bcf4","Type":"ContainerStarted","Data":"ec06deaea8298322ca5d0df2f06e1a5a4840476b64b27a0b025364a55b88403b"} Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.235928 4775 generic.go:334] "Generic (PLEG): container finished" podID="54dca2d8-8976-4d24-b97a-a9e867d0d74b" containerID="9a91074107a17fcce56a70d39828d5158e5bd69aad20fac31d3f42d29a5adfed" exitCode=0 Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.236172 4775 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" event={"ID":"54dca2d8-8976-4d24-b97a-a9e867d0d74b","Type":"ContainerDied","Data":"9a91074107a17fcce56a70d39828d5158e5bd69aad20fac31d3f42d29a5adfed"} Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.240781 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.243145 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.743131257 +0000 UTC m=+150.659493623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.251895 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"072c9f07c047874d06ea2d18a579a1086950fda615f52c11cf19ec2334c26dea"} Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.252512 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.256263 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" podStartSLOduration=129.256247442 podStartE2EDuration="2m9.256247442s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:08.243537361 +0000 UTC m=+150.159899727" watchObservedRunningTime="2025-11-25 19:36:08.256247442 +0000 UTC m=+150.172609808" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.281034 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sl6bq" event={"ID":"63343f9f-b1cf-43a2-9879-34ba51820dae","Type":"ContainerStarted","Data":"eb35f097d91b689d5a3dcd7cbc8d63fbe0979d3e02b7db85c1536d7cc9baeb27"} Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.281978 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-dns/dns-default-sl6bq" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.295830 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2c33b744ddf6afa7ff8c69676c58f7fee1271fc8893640e38896ef57d5d8d2a4"} Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.308882 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"be0852e198814390662f6c5011d0df516933c1e49d4c09ab2caebce2f936870f"} Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.310530 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-7h68s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.310570 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7h68s" podUID="7959f454-8db6-4c44-9d44-9b3b2862935f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.310972 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5h4vj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.310996 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" 
podUID="3566ef9c-3d80-480e-b069-1ff60753877f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.324858 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f7vp7" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.333863 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sl6bq" podStartSLOduration=10.333840572 podStartE2EDuration="10.333840572s" podCreationTimestamp="2025-11-25 19:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:08.333120866 +0000 UTC m=+150.249483232" watchObservedRunningTime="2025-11-25 19:36:08.333840572 +0000 UTC m=+150.250202938" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.342824 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.344768 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.844741948 +0000 UTC m=+150.761104314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.446174 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.446588 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:08.946573467 +0000 UTC m=+150.862935833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.472928 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 19:36:08 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Nov 25 19:36:08 crc kubenswrapper[4775]: [+]process-running ok Nov 25 19:36:08 crc kubenswrapper[4775]: healthz check failed Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.473021 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.520021 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m8wds" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.542936 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.547508 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.547953 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:09.04793597 +0000 UTC m=+150.964298336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.650005 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.650373 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:09.150357639 +0000 UTC m=+151.066720005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.697848 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hnvt5"] Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.699272 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.712004 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hnvt5"] Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.720890 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.756378 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.756661 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkkjn\" (UniqueName: \"kubernetes.io/projected/3516b667-e83c-45a5-9f21-6bf5e0572b9a-kube-api-access-vkkjn\") pod \"community-operators-hnvt5\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " 
pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.756709 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-utilities\") pod \"community-operators-hnvt5\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.756750 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-catalog-content\") pod \"community-operators-hnvt5\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.756886 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:09.256861804 +0000 UTC m=+151.173224170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.857781 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkkjn\" (UniqueName: \"kubernetes.io/projected/3516b667-e83c-45a5-9f21-6bf5e0572b9a-kube-api-access-vkkjn\") pod \"community-operators-hnvt5\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.857837 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-utilities\") pod \"community-operators-hnvt5\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.857869 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-catalog-content\") pod \"community-operators-hnvt5\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.857897 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: 
\"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.858246 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 19:36:09.358231026 +0000 UTC m=+151.274593392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-75q9h" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.858383 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-catalog-content\") pod \"community-operators-hnvt5\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.858459 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-utilities\") pod \"community-operators-hnvt5\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.882076 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkkjn\" (UniqueName: \"kubernetes.io/projected/3516b667-e83c-45a5-9f21-6bf5e0572b9a-kube-api-access-vkkjn\") pod \"community-operators-hnvt5\" (UID: 
\"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.903428 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jsmsk"] Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.907816 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.915881 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsmsk"] Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.929910 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.959306 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.959592 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwsxx\" (UniqueName: \"kubernetes.io/projected/25f6b7d2-1661-4d49-8648-2f665206c2e9-kube-api-access-wwsxx\") pod \"certified-operators-jsmsk\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.959832 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-utilities\") pod \"certified-operators-jsmsk\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " 
pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:08 crc kubenswrapper[4775]: I1125 19:36:08.959863 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-catalog-content\") pod \"certified-operators-jsmsk\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:08 crc kubenswrapper[4775]: E1125 19:36:08.959994 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 19:36:09.459977942 +0000 UTC m=+151.376340308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.009818 4775 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.015445 4775 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T19:36:09.00987805Z","Handler":null,"Name":""} Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.018735 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.024773 4775 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.026225 4775 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.060883 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-catalog-content\") pod \"certified-operators-jsmsk\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.061004 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.061035 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwsxx\" (UniqueName: \"kubernetes.io/projected/25f6b7d2-1661-4d49-8648-2f665206c2e9-kube-api-access-wwsxx\") pod \"certified-operators-jsmsk\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.061053 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-utilities\") pod \"certified-operators-jsmsk\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.061397 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-catalog-content\") pod \"certified-operators-jsmsk\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.062015 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-utilities\") pod \"certified-operators-jsmsk\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.064012 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.064056 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.102727 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwsxx\" (UniqueName: \"kubernetes.io/projected/25f6b7d2-1661-4d49-8648-2f665206c2e9-kube-api-access-wwsxx\") pod \"certified-operators-jsmsk\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.106348 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7kd5c"] Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.107321 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.122203 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kd5c"] Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.130264 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-75q9h\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.162581 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.163021 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-utilities\") pod \"community-operators-7kd5c\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.163093 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-catalog-content\") pod \"community-operators-7kd5c\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.163116 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89nks\" (UniqueName: \"kubernetes.io/projected/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-kube-api-access-89nks\") pod \"community-operators-7kd5c\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.195409 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.218211 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.219161 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.220149 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.226125 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.232011 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.232130 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.241140 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.265089 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-utilities\") pod \"community-operators-7kd5c\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.265140 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7414afc8-2fa8-45ab-8f5e-4898caf58072-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7414afc8-2fa8-45ab-8f5e-4898caf58072\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.265184 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7414afc8-2fa8-45ab-8f5e-4898caf58072-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7414afc8-2fa8-45ab-8f5e-4898caf58072\") 
" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.265223 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-catalog-content\") pod \"community-operators-7kd5c\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.265245 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89nks\" (UniqueName: \"kubernetes.io/projected/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-kube-api-access-89nks\") pod \"community-operators-7kd5c\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.265999 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-utilities\") pod \"community-operators-7kd5c\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.266272 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-catalog-content\") pod \"community-operators-7kd5c\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.289020 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rjqh5"] Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.290275 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rjqh5" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.307658 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjqh5"] Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.309014 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89nks\" (UniqueName: \"kubernetes.io/projected/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-kube-api-access-89nks\") pod \"community-operators-7kd5c\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.329992 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ddce30c84b0d4f020a993feb5f32708530cb65b6f6757531325829808003e108"} Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.332015 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5269eb782f7a47ce795c06ceda668ddb782ed417fa4f6e9417ea6f7341a4a8a9"} Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.333803 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"317e35ef25e05850b4ddcc1709c4474151f4b4eec77ad66836336ca69c46e59c"} Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.340610 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-76xlm" event={"ID":"fd6511bf-ce8c-40d6-913e-b28add158dee","Type":"ContainerStarted","Data":"3bb8bd7d7d602307f855015a3459136e676b69d1626e4ed9fe331b7192b08288"} 
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.340701 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-76xlm" event={"ID":"fd6511bf-ce8c-40d6-913e-b28add158dee","Type":"ContainerStarted","Data":"c708540554ed73f12b2792b76e837d84c2cab4730bb63de44523485873a0e65a"}
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.340714 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-76xlm" event={"ID":"fd6511bf-ce8c-40d6-913e-b28add158dee","Type":"ContainerStarted","Data":"60fe853c54fecdbdfe25e47d9230ec6ed3063e6795084abf17f298a94aa94285"}
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.341527 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-7h68s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.341565 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7h68s" podUID="7959f454-8db6-4c44-9d44-9b3b2862935f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.347158 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.367099 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7414afc8-2fa8-45ab-8f5e-4898caf58072-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7414afc8-2fa8-45ab-8f5e-4898caf58072\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.367222 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7414afc8-2fa8-45ab-8f5e-4898caf58072-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7414afc8-2fa8-45ab-8f5e-4898caf58072\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.367258 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-catalog-content\") pod \"certified-operators-rjqh5\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.367340 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9szv\" (UniqueName: \"kubernetes.io/projected/394f2d01-bb7b-49b6-95f3-5430b4987766-kube-api-access-n9szv\") pod \"certified-operators-rjqh5\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.367538 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-utilities\") pod \"certified-operators-rjqh5\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.370096 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7414afc8-2fa8-45ab-8f5e-4898caf58072-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7414afc8-2fa8-45ab-8f5e-4898caf58072\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.400294 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hnvt5"]
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.416088 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-76xlm" podStartSLOduration=11.416065776 podStartE2EDuration="11.416065776s" podCreationTimestamp="2025-11-25 19:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:09.410303921 +0000 UTC m=+151.326666277" watchObservedRunningTime="2025-11-25 19:36:09.416065776 +0000 UTC m=+151.332428142"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.420039 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7414afc8-2fa8-45ab-8f5e-4898caf58072-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7414afc8-2fa8-45ab-8f5e-4898caf58072\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.440272 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7kd5c"
Nov 25 19:36:09 crc kubenswrapper[4775]: W1125 19:36:09.453947 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3516b667_e83c_45a5_9f21_6bf5e0572b9a.slice/crio-eb407860ca2a5645cde81d716d4d920e2221ea58587bba6f827e07570da317fc WatchSource:0}: Error finding container eb407860ca2a5645cde81d716d4d920e2221ea58587bba6f827e07570da317fc: Status 404 returned error can't find the container with id eb407860ca2a5645cde81d716d4d920e2221ea58587bba6f827e07570da317fc
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.469997 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-catalog-content\") pod \"certified-operators-rjqh5\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.470051 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9szv\" (UniqueName: \"kubernetes.io/projected/394f2d01-bb7b-49b6-95f3-5430b4987766-kube-api-access-n9szv\") pod \"certified-operators-rjqh5\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.470097 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-utilities\") pod \"certified-operators-rjqh5\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.470619 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-utilities\") pod \"certified-operators-rjqh5\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.470848 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-catalog-content\") pod \"certified-operators-rjqh5\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.485534 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:09 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:09 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:09 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.485594 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.508218 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9szv\" (UniqueName: \"kubernetes.io/projected/394f2d01-bb7b-49b6-95f3-5430b4987766-kube-api-access-n9szv\") pod \"certified-operators-rjqh5\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.556118 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.624132 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-75q9h"]
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.628781 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rjqh5"
Nov 25 19:36:09 crc kubenswrapper[4775]: W1125 19:36:09.664929 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca4b44ae_0ced_4acf_aa65_92a6fda3f98e.slice/crio-bc852cdebed05722143dfb25b1a2d01fe742487a7a5669b0a3da1504c5016669 WatchSource:0}: Error finding container bc852cdebed05722143dfb25b1a2d01fe742487a7a5669b0a3da1504c5016669: Status 404 returned error can't find the container with id bc852cdebed05722143dfb25b1a2d01fe742487a7a5669b0a3da1504c5016669
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.766874 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsmsk"]
Nov 25 19:36:09 crc kubenswrapper[4775]: W1125 19:36:09.839182 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25f6b7d2_1661_4d49_8648_2f665206c2e9.slice/crio-c0c50cd4e9fbaf98c08713dfe9e13ce05f9dd6a1d630bad16995491d735ba540 WatchSource:0}: Error finding container c0c50cd4e9fbaf98c08713dfe9e13ce05f9dd6a1d630bad16995491d735ba540: Status 404 returned error can't find the container with id c0c50cd4e9fbaf98c08713dfe9e13ce05f9dd6a1d630bad16995491d735ba540
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.841076 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw"
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.863767 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kd5c"]
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.885249 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54dca2d8-8976-4d24-b97a-a9e867d0d74b-secret-volume\") pod \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") "
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.885388 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dca2d8-8976-4d24-b97a-a9e867d0d74b-config-volume\") pod \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") "
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.885455 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2pl4\" (UniqueName: \"kubernetes.io/projected/54dca2d8-8976-4d24-b97a-a9e867d0d74b-kube-api-access-m2pl4\") pod \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\" (UID: \"54dca2d8-8976-4d24-b97a-a9e867d0d74b\") "
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.886677 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54dca2d8-8976-4d24-b97a-a9e867d0d74b-config-volume" (OuterVolumeSpecName: "config-volume") pod "54dca2d8-8976-4d24-b97a-a9e867d0d74b" (UID: "54dca2d8-8976-4d24-b97a-a9e867d0d74b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.898574 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54dca2d8-8976-4d24-b97a-a9e867d0d74b-kube-api-access-m2pl4" (OuterVolumeSpecName: "kube-api-access-m2pl4") pod "54dca2d8-8976-4d24-b97a-a9e867d0d74b" (UID: "54dca2d8-8976-4d24-b97a-a9e867d0d74b"). InnerVolumeSpecName "kube-api-access-m2pl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:36:09 crc kubenswrapper[4775]: I1125 19:36:09.902393 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54dca2d8-8976-4d24-b97a-a9e867d0d74b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54dca2d8-8976-4d24-b97a-a9e867d0d74b" (UID: "54dca2d8-8976-4d24-b97a-a9e867d0d74b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:09.991753 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54dca2d8-8976-4d24-b97a-a9e867d0d74b-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:09.991795 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dca2d8-8976-4d24-b97a-a9e867d0d74b-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:09.991806 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2pl4\" (UniqueName: \"kubernetes.io/projected/54dca2d8-8976-4d24-b97a-a9e867d0d74b-kube-api-access-m2pl4\") on node \"crc\" DevicePath \"\""
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.021941 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.296014 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjqh5"]
Nov 25 19:36:10 crc kubenswrapper[4775]: W1125 19:36:10.310243 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod394f2d01_bb7b_49b6_95f3_5430b4987766.slice/crio-c3cce0371c2c2dad56a4f4de9bf721aed26633d3dfc2cb825f7738f7f48f21f9 WatchSource:0}: Error finding container c3cce0371c2c2dad56a4f4de9bf721aed26633d3dfc2cb825f7738f7f48f21f9: Status 404 returned error can't find the container with id c3cce0371c2c2dad56a4f4de9bf721aed26633d3dfc2cb825f7738f7f48f21f9
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.350394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7414afc8-2fa8-45ab-8f5e-4898caf58072","Type":"ContainerStarted","Data":"9d06c3fffaafc54b1e0aa4e30e1893cc5273b0a74b1992be4f3f7128e0a2551e"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.352593 4775 generic.go:334] "Generic (PLEG): container finished" podID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerID="eedd9def46401a4f88bd79846419d037d980182c10611505f69a85d19134678a" exitCode=0
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.352676 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hnvt5" event={"ID":"3516b667-e83c-45a5-9f21-6bf5e0572b9a","Type":"ContainerDied","Data":"eedd9def46401a4f88bd79846419d037d980182c10611505f69a85d19134678a"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.352714 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hnvt5" event={"ID":"3516b667-e83c-45a5-9f21-6bf5e0572b9a","Type":"ContainerStarted","Data":"eb407860ca2a5645cde81d716d4d920e2221ea58587bba6f827e07570da317fc"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.354481 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.355566 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.355578 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw" event={"ID":"54dca2d8-8976-4d24-b97a-a9e867d0d74b","Type":"ContainerDied","Data":"ea4952d722b0b1d2af214af868891d51f758da6e24774304d327816049486c0f"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.355618 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4952d722b0b1d2af214af868891d51f758da6e24774304d327816049486c0f"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.357606 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjqh5" event={"ID":"394f2d01-bb7b-49b6-95f3-5430b4987766","Type":"ContainerStarted","Data":"c3cce0371c2c2dad56a4f4de9bf721aed26633d3dfc2cb825f7738f7f48f21f9"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.360119 4775 generic.go:334] "Generic (PLEG): container finished" podID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerID="39ce28b2856c8d25388324d2f1ac4f71c7df48dfb702eecfaef99dcb155fc7de" exitCode=0
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.360270 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsmsk" event={"ID":"25f6b7d2-1661-4d49-8648-2f665206c2e9","Type":"ContainerDied","Data":"39ce28b2856c8d25388324d2f1ac4f71c7df48dfb702eecfaef99dcb155fc7de"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.360311 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsmsk" event={"ID":"25f6b7d2-1661-4d49-8648-2f665206c2e9","Type":"ContainerStarted","Data":"c0c50cd4e9fbaf98c08713dfe9e13ce05f9dd6a1d630bad16995491d735ba540"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.366139 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" event={"ID":"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e","Type":"ContainerStarted","Data":"3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.366187 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" event={"ID":"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e","Type":"ContainerStarted","Data":"bc852cdebed05722143dfb25b1a2d01fe742487a7a5669b0a3da1504c5016669"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.366361 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.368436 4775 generic.go:334] "Generic (PLEG): container finished" podID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerID="490107346b8f8cfc6023549cd3b5342480096f52887e32f304d5ea981fa1ffb9" exitCode=0
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.369386 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kd5c" event={"ID":"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9","Type":"ContainerDied","Data":"490107346b8f8cfc6023549cd3b5342480096f52887e32f304d5ea981fa1ffb9"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.369404 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kd5c" event={"ID":"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9","Type":"ContainerStarted","Data":"281e5930a5f1e84b5f8de3cf031a060ddf2e8595edc4b4cc48783dc79663eb50"}
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.450173 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" podStartSLOduration=131.450147773 podStartE2EDuration="2m11.450147773s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:10.45006262 +0000 UTC m=+152.366424996" watchObservedRunningTime="2025-11-25 19:36:10.450147773 +0000 UTC m=+152.366510139"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.470283 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:10 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:10 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:10 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.470363 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.539819 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.539925 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.547292 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.615177 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2c2hp"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.615512 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2c2hp"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.617465 4775 patch_prober.go:28] interesting pod/console-f9d7485db-2c2hp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.617549 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2c2hp" podUID="f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.680628 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-brl6t"]
Nov 25 19:36:10 crc kubenswrapper[4775]: E1125 19:36:10.680929 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54dca2d8-8976-4d24-b97a-a9e867d0d74b" containerName="collect-profiles"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.680947 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="54dca2d8-8976-4d24-b97a-a9e867d0d74b" containerName="collect-profiles"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.681100 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="54dca2d8-8976-4d24-b97a-a9e867d0d74b" containerName="collect-profiles"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.683266 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.685788 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.689116 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.690252 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.694039 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.694470 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.706609 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.711430 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brl6t"]
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.807118 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80307464-600f-475d-a4ed-23d3aaf98297-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"80307464-600f-475d-a4ed-23d3aaf98297\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.807407 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-catalog-content\") pod \"redhat-marketplace-brl6t\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.807437 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-utilities\") pod \"redhat-marketplace-brl6t\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.807460 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p86hz\" (UniqueName: \"kubernetes.io/projected/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-kube-api-access-p86hz\") pod \"redhat-marketplace-brl6t\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.808228 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80307464-600f-475d-a4ed-23d3aaf98297-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"80307464-600f-475d-a4ed-23d3aaf98297\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.854758 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.910959 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80307464-600f-475d-a4ed-23d3aaf98297-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"80307464-600f-475d-a4ed-23d3aaf98297\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.911010 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-catalog-content\") pod \"redhat-marketplace-brl6t\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.911043 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-utilities\") pod \"redhat-marketplace-brl6t\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.911068 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p86hz\" (UniqueName: \"kubernetes.io/projected/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-kube-api-access-p86hz\") pod \"redhat-marketplace-brl6t\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.911109 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80307464-600f-475d-a4ed-23d3aaf98297-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"80307464-600f-475d-a4ed-23d3aaf98297\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.911156 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80307464-600f-475d-a4ed-23d3aaf98297-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"80307464-600f-475d-a4ed-23d3aaf98297\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.911865 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-utilities\") pod \"redhat-marketplace-brl6t\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.911930 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-catalog-content\") pod \"redhat-marketplace-brl6t\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.934684 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80307464-600f-475d-a4ed-23d3aaf98297-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"80307464-600f-475d-a4ed-23d3aaf98297\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 19:36:10 crc kubenswrapper[4775]: I1125 19:36:10.937321 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p86hz\" (UniqueName: \"kubernetes.io/projected/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-kube-api-access-p86hz\") pod \"redhat-marketplace-brl6t\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.009836 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brl6t"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.028844 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.070726 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.070797 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.090153 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p98dh"]
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.091466 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.100595 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p98dh"]
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.113582 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-catalog-content\") pod \"redhat-marketplace-p98dh\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.113698 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2p7\" (UniqueName: \"kubernetes.io/projected/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-kube-api-access-vm2p7\") pod \"redhat-marketplace-p98dh\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.113782 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-utilities\") pod \"redhat-marketplace-p98dh\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.214503 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-catalog-content\") pod \"redhat-marketplace-p98dh\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.214995 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2p7\" (UniqueName: \"kubernetes.io/projected/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-kube-api-access-vm2p7\") pod \"redhat-marketplace-p98dh\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.215063 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-utilities\") pod \"redhat-marketplace-p98dh\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.215485 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-utilities\") pod \"redhat-marketplace-p98dh\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.215961 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-catalog-content\") pod \"redhat-marketplace-p98dh\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.246569 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm2p7\" (UniqueName: \"kubernetes.io/projected/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-kube-api-access-vm2p7\") pod \"redhat-marketplace-p98dh\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " pod="openshift-marketplace/redhat-marketplace-p98dh"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.298817 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brl6t"]
Nov 25 19:36:11 crc kubenswrapper[4775]: W1125 19:36:11.314108 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8ac5d6_cdb8_4bf0_8c8c_1970864a85d1.slice/crio-6dcbf5e63909790b333a4d9980207459387e61df22e76d36f580523ff41803a7 WatchSource:0}: Error finding container 6dcbf5e63909790b333a4d9980207459387e61df22e76d36f580523ff41803a7: Status 404 returned error can't find the container with id 6dcbf5e63909790b333a4d9980207459387e61df22e76d36f580523ff41803a7
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.379161 4775 generic.go:334] "Generic (PLEG): container finished" podID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerID="b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e" exitCode=0
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.379265 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjqh5" event={"ID":"394f2d01-bb7b-49b6-95f3-5430b4987766","Type":"ContainerDied","Data":"b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e"}
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.408221 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brl6t" event={"ID":"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1","Type":"ContainerStarted","Data":"6dcbf5e63909790b333a4d9980207459387e61df22e76d36f580523ff41803a7"}
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.420210 4775 generic.go:334] "Generic (PLEG): container finished" podID="7414afc8-2fa8-45ab-8f5e-4898caf58072" containerID="b49024dab53b5df67706e9e3b5b24b4261f6ef053d83952fd6b2b88175d3fd35" exitCode=0
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.421372 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7414afc8-2fa8-45ab-8f5e-4898caf58072","Type":"ContainerDied","Data":"b49024dab53b5df67706e9e3b5b24b4261f6ef053d83952fd6b2b88175d3fd35"}
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.440422 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4swd2"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.469274 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vxrkp"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.474916 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:11 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:11 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:11 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.475504 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.493729 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p98dh" Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.601640 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 19:36:11 crc kubenswrapper[4775]: W1125 19:36:11.642777 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod80307464_600f_475d_a4ed_23d3aaf98297.slice/crio-7eabd138467c2f4ccb4a42ffa887e6825d479d299f7b3e5c7a254dfc23f10e0d WatchSource:0}: Error finding container 7eabd138467c2f4ccb4a42ffa887e6825d479d299f7b3e5c7a254dfc23f10e0d: Status 404 returned error can't find the container with id 7eabd138467c2f4ccb4a42ffa887e6825d479d299f7b3e5c7a254dfc23f10e0d Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.695787 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.695846 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.714097 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.752584 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-7h68s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.752673 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7h68s" podUID="7959f454-8db6-4c44-9d44-9b3b2862935f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: 
connection refused" Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.752820 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-7h68s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.752847 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7h68s" podUID="7959f454-8db6-4c44-9d44-9b3b2862935f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.808717 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p98dh"] Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.882776 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w2698"] Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.884467 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.887961 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 19:36:11 crc kubenswrapper[4775]: I1125 19:36:11.899150 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2698"] Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.029256 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-utilities\") pod \"redhat-operators-w2698\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.029318 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-catalog-content\") pod \"redhat-operators-w2698\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.029657 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzbb\" (UniqueName: \"kubernetes.io/projected/b45b3f08-fc2c-46cc-b48d-edfc0183c332-kube-api-access-5qzbb\") pod \"redhat-operators-w2698\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.133591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-utilities\") pod \"redhat-operators-w2698\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " 
pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.133709 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-catalog-content\") pod \"redhat-operators-w2698\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.133771 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzbb\" (UniqueName: \"kubernetes.io/projected/b45b3f08-fc2c-46cc-b48d-edfc0183c332-kube-api-access-5qzbb\") pod \"redhat-operators-w2698\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.135054 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-utilities\") pod \"redhat-operators-w2698\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.135290 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-catalog-content\") pod \"redhat-operators-w2698\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.152132 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qzbb\" (UniqueName: \"kubernetes.io/projected/b45b3f08-fc2c-46cc-b48d-edfc0183c332-kube-api-access-5qzbb\") pod \"redhat-operators-w2698\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " pod="openshift-marketplace/redhat-operators-w2698" Nov 
25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.206158 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.295261 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4x5nk"] Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.296403 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.305081 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4x5nk"] Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.435921 4775 generic.go:334] "Generic (PLEG): container finished" podID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerID="cf7620c6384ccf94048acf68f9a1d26fa6a64da3fe29514f730ab700b8017661" exitCode=0 Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.436280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brl6t" event={"ID":"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1","Type":"ContainerDied","Data":"cf7620c6384ccf94048acf68f9a1d26fa6a64da3fe29514f730ab700b8017661"} Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.437575 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-utilities\") pod \"redhat-operators-4x5nk\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.437638 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-catalog-content\") pod \"redhat-operators-4x5nk\" 
(UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.437898 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2htw\" (UniqueName: \"kubernetes.io/projected/0e5bac86-7638-49bd-b896-444ff16bc88c-kube-api-access-v2htw\") pod \"redhat-operators-4x5nk\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.440284 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"80307464-600f-475d-a4ed-23d3aaf98297","Type":"ContainerStarted","Data":"0dad9c36a45dc667b20e5891cee7a03fd47bf9e2c609131b8309692315454d3a"} Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.440339 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"80307464-600f-475d-a4ed-23d3aaf98297","Type":"ContainerStarted","Data":"7eabd138467c2f4ccb4a42ffa887e6825d479d299f7b3e5c7a254dfc23f10e0d"} Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.443594 4775 generic.go:334] "Generic (PLEG): container finished" podID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerID="4cf069afbca3b2929b35c49da0fd8ca7a1649bc5c4ab71087fe8ccb0f8dcbc41" exitCode=0 Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.443679 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p98dh" event={"ID":"958d3bd1-ce50-413a-803b-3e2bc2e6ba69","Type":"ContainerDied","Data":"4cf069afbca3b2929b35c49da0fd8ca7a1649bc5c4ab71087fe8ccb0f8dcbc41"} Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.443706 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p98dh" 
event={"ID":"958d3bd1-ce50-413a-803b-3e2bc2e6ba69","Type":"ContainerStarted","Data":"794cc2e962953c568dc86792f6048d002a8c4b991e7e98e17f58f70def2d6953"} Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.449419 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-zzjj4" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.476565 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 19:36:12 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Nov 25 19:36:12 crc kubenswrapper[4775]: [+]process-running ok Nov 25 19:36:12 crc kubenswrapper[4775]: healthz check failed Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.476718 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.542513 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-utilities\") pod \"redhat-operators-4x5nk\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.543068 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-catalog-content\") pod \"redhat-operators-4x5nk\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 
19:36:12.543109 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2htw\" (UniqueName: \"kubernetes.io/projected/0e5bac86-7638-49bd-b896-444ff16bc88c-kube-api-access-v2htw\") pod \"redhat-operators-4x5nk\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.548925 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-catalog-content\") pod \"redhat-operators-4x5nk\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.549378 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-utilities\") pod \"redhat-operators-4x5nk\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.586231 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.586203894 podStartE2EDuration="2.586203894s" podCreationTimestamp="2025-11-25 19:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:12.584948649 +0000 UTC m=+154.501311015" watchObservedRunningTime="2025-11-25 19:36:12.586203894 +0000 UTC m=+154.502566260" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.587888 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2htw\" (UniqueName: \"kubernetes.io/projected/0e5bac86-7638-49bd-b896-444ff16bc88c-kube-api-access-v2htw\") pod \"redhat-operators-4x5nk\" (UID: 
\"0e5bac86-7638-49bd-b896-444ff16bc88c\") " pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.645762 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.777481 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2698"] Nov 25 19:36:12 crc kubenswrapper[4775]: W1125 19:36:12.792242 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45b3f08_fc2c_46cc_b48d_edfc0183c332.slice/crio-c12f63677395509853500f5d355fbe9882697fb853a4c2f618df971ac70b312e WatchSource:0}: Error finding container c12f63677395509853500f5d355fbe9882697fb853a4c2f618df971ac70b312e: Status 404 returned error can't find the container with id c12f63677395509853500f5d355fbe9882697fb853a4c2f618df971ac70b312e Nov 25 19:36:12 crc kubenswrapper[4775]: I1125 19:36:12.909364 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.053849 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7414afc8-2fa8-45ab-8f5e-4898caf58072-kube-api-access\") pod \"7414afc8-2fa8-45ab-8f5e-4898caf58072\" (UID: \"7414afc8-2fa8-45ab-8f5e-4898caf58072\") " Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.057586 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7414afc8-2fa8-45ab-8f5e-4898caf58072-kubelet-dir\") pod \"7414afc8-2fa8-45ab-8f5e-4898caf58072\" (UID: \"7414afc8-2fa8-45ab-8f5e-4898caf58072\") " Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.057796 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7414afc8-2fa8-45ab-8f5e-4898caf58072-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7414afc8-2fa8-45ab-8f5e-4898caf58072" (UID: "7414afc8-2fa8-45ab-8f5e-4898caf58072"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.058232 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7414afc8-2fa8-45ab-8f5e-4898caf58072-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.120226 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7414afc8-2fa8-45ab-8f5e-4898caf58072-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7414afc8-2fa8-45ab-8f5e-4898caf58072" (UID: "7414afc8-2fa8-45ab-8f5e-4898caf58072"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.159140 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7414afc8-2fa8-45ab-8f5e-4898caf58072-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.163796 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4x5nk"] Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.474055 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 19:36:13 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Nov 25 19:36:13 crc kubenswrapper[4775]: [+]process-running ok Nov 25 19:36:13 crc kubenswrapper[4775]: healthz check failed Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.475318 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.515335 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.515480 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7414afc8-2fa8-45ab-8f5e-4898caf58072","Type":"ContainerDied","Data":"9d06c3fffaafc54b1e0aa4e30e1893cc5273b0a74b1992be4f3f7128e0a2551e"} Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.515691 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d06c3fffaafc54b1e0aa4e30e1893cc5273b0a74b1992be4f3f7128e0a2551e" Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.530782 4775 generic.go:334] "Generic (PLEG): container finished" podID="80307464-600f-475d-a4ed-23d3aaf98297" containerID="0dad9c36a45dc667b20e5891cee7a03fd47bf9e2c609131b8309692315454d3a" exitCode=0 Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.530861 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"80307464-600f-475d-a4ed-23d3aaf98297","Type":"ContainerDied","Data":"0dad9c36a45dc667b20e5891cee7a03fd47bf9e2c609131b8309692315454d3a"} Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.536887 4775 generic.go:334] "Generic (PLEG): container finished" podID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerID="416eef73c9a3136a2b9cc11f59d36006f90ca1a7f44c760fc12a07a6dd9b27fc" exitCode=0 Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.537000 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2698" event={"ID":"b45b3f08-fc2c-46cc-b48d-edfc0183c332","Type":"ContainerDied","Data":"416eef73c9a3136a2b9cc11f59d36006f90ca1a7f44c760fc12a07a6dd9b27fc"} Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.537066 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2698" 
event={"ID":"b45b3f08-fc2c-46cc-b48d-edfc0183c332","Type":"ContainerStarted","Data":"c12f63677395509853500f5d355fbe9882697fb853a4c2f618df971ac70b312e"} Nov 25 19:36:13 crc kubenswrapper[4775]: I1125 19:36:13.544265 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x5nk" event={"ID":"0e5bac86-7638-49bd-b896-444ff16bc88c","Type":"ContainerStarted","Data":"4dcb62a2fac6b2e5aee6021c2e99e4b90e789966ba79f5a09971a4035fe5d0bf"} Nov 25 19:36:14 crc kubenswrapper[4775]: I1125 19:36:14.470894 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 19:36:14 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Nov 25 19:36:14 crc kubenswrapper[4775]: [+]process-running ok Nov 25 19:36:14 crc kubenswrapper[4775]: healthz check failed Nov 25 19:36:14 crc kubenswrapper[4775]: I1125 19:36:14.470968 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 19:36:14 crc kubenswrapper[4775]: I1125 19:36:14.578053 4775 generic.go:334] "Generic (PLEG): container finished" podID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerID="fb94ff879f4a65f332b9620d7048cc871b5bef63a7f5e58e68f3a25faf46b4b9" exitCode=0 Nov 25 19:36:14 crc kubenswrapper[4775]: I1125 19:36:14.578169 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x5nk" event={"ID":"0e5bac86-7638-49bd-b896-444ff16bc88c","Type":"ContainerDied","Data":"fb94ff879f4a65f332b9620d7048cc871b5bef63a7f5e58e68f3a25faf46b4b9"} Nov 25 19:36:14 crc kubenswrapper[4775]: I1125 19:36:14.933881 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.009632 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80307464-600f-475d-a4ed-23d3aaf98297-kubelet-dir\") pod \"80307464-600f-475d-a4ed-23d3aaf98297\" (UID: \"80307464-600f-475d-a4ed-23d3aaf98297\") " Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.009730 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80307464-600f-475d-a4ed-23d3aaf98297-kube-api-access\") pod \"80307464-600f-475d-a4ed-23d3aaf98297\" (UID: \"80307464-600f-475d-a4ed-23d3aaf98297\") " Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.009755 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80307464-600f-475d-a4ed-23d3aaf98297-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "80307464-600f-475d-a4ed-23d3aaf98297" (UID: "80307464-600f-475d-a4ed-23d3aaf98297"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.010120 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80307464-600f-475d-a4ed-23d3aaf98297-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.035871 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80307464-600f-475d-a4ed-23d3aaf98297-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "80307464-600f-475d-a4ed-23d3aaf98297" (UID: "80307464-600f-475d-a4ed-23d3aaf98297"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.115572 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80307464-600f-475d-a4ed-23d3aaf98297-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.471633 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:15 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:15 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:15 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.471883 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.590425 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"80307464-600f-475d-a4ed-23d3aaf98297","Type":"ContainerDied","Data":"7eabd138467c2f4ccb4a42ffa887e6825d479d299f7b3e5c7a254dfc23f10e0d"}
Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.590473 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eabd138467c2f4ccb4a42ffa887e6825d479d299f7b3e5c7a254dfc23f10e0d"
Nov 25 19:36:15 crc kubenswrapper[4775]: I1125 19:36:15.590496 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 19:36:16 crc kubenswrapper[4775]: I1125 19:36:16.469506 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:16 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:16 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:16 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:16 crc kubenswrapper[4775]: I1125 19:36:16.469996 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:16 crc kubenswrapper[4775]: I1125 19:36:16.933461 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sl6bq"
Nov 25 19:36:17 crc kubenswrapper[4775]: I1125 19:36:17.471339 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:17 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:17 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:17 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:17 crc kubenswrapper[4775]: I1125 19:36:17.471526 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:18 crc kubenswrapper[4775]: I1125 19:36:18.469356 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:18 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:18 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:18 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:18 crc kubenswrapper[4775]: I1125 19:36:18.469464 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:19 crc kubenswrapper[4775]: I1125 19:36:19.470510 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:19 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:19 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:19 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:19 crc kubenswrapper[4775]: I1125 19:36:19.470579 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:20 crc kubenswrapper[4775]: I1125 19:36:20.470721 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:20 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:20 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:20 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:20 crc kubenswrapper[4775]: I1125 19:36:20.471306 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:20 crc kubenswrapper[4775]: I1125 19:36:20.615123 4775 patch_prober.go:28] interesting pod/console-f9d7485db-2c2hp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Nov 25 19:36:20 crc kubenswrapper[4775]: I1125 19:36:20.615233 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2c2hp" podUID="f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Nov 25 19:36:21 crc kubenswrapper[4775]: I1125 19:36:21.469473 4775 patch_prober.go:28] interesting pod/router-default-5444994796-vxrkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 19:36:21 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Nov 25 19:36:21 crc kubenswrapper[4775]: [+]process-running ok
Nov 25 19:36:21 crc kubenswrapper[4775]: healthz check failed
Nov 25 19:36:21 crc kubenswrapper[4775]: I1125 19:36:21.469581 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vxrkp" podUID="79cf629d-9f55-42f4-b5fa-58532bc6d191" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 19:36:21 crc kubenswrapper[4775]: I1125 19:36:21.752866 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-7h68s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Nov 25 19:36:21 crc kubenswrapper[4775]: I1125 19:36:21.752953 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7h68s" podUID="7959f454-8db6-4c44-9d44-9b3b2862935f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Nov 25 19:36:21 crc kubenswrapper[4775]: I1125 19:36:21.753012 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-7h68s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Nov 25 19:36:21 crc kubenswrapper[4775]: I1125 19:36:21.753202 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7h68s" podUID="7959f454-8db6-4c44-9d44-9b3b2862935f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Nov 25 19:36:22 crc kubenswrapper[4775]: I1125 19:36:22.470660 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-vxrkp"
Nov 25 19:36:22 crc kubenswrapper[4775]: I1125 19:36:22.475028 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-vxrkp"
Nov 25 19:36:22 crc kubenswrapper[4775]: I1125 19:36:22.487272 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:36:22 crc kubenswrapper[4775]: I1125 19:36:22.501750 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f-metrics-certs\") pod \"network-metrics-daemon-69dvc\" (UID: \"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f\") " pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:36:22 crc kubenswrapper[4775]: I1125 19:36:22.570372 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69dvc"
Nov 25 19:36:26 crc kubenswrapper[4775]: I1125 19:36:26.022801 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-69dvc"]
Nov 25 19:36:26 crc kubenswrapper[4775]: I1125 19:36:26.701217 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-69dvc" event={"ID":"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f","Type":"ContainerStarted","Data":"94a24075d0ce000c22e86a4676549df9322262f7857f1a3ea41ccffe26ae51dd"}
Nov 25 19:36:26 crc kubenswrapper[4775]: I1125 19:36:26.701296 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-69dvc" event={"ID":"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f","Type":"ContainerStarted","Data":"183311302dcf0c6a0e6475395e28ad53d322f1559fc098adb0b9765d6713655d"}
Nov 25 19:36:28 crc kubenswrapper[4775]: I1125 19:36:28.717404 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-69dvc" event={"ID":"f5e3c7b3-6b70-49ab-a70a-58ba65f1b40f","Type":"ContainerStarted","Data":"2e850708d57c0f74ead5914d411a7469f99c72b3dc2dd42808dec2d20a413051"}
Nov 25 19:36:29 crc kubenswrapper[4775]: I1125 19:36:29.224629 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h"
Nov 25 19:36:29 crc kubenswrapper[4775]: I1125 19:36:29.749328 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-69dvc" podStartSLOduration=150.749298364 podStartE2EDuration="2m30.749298364s" podCreationTimestamp="2025-11-25 19:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:29.74158134 +0000 UTC m=+171.657943706" watchObservedRunningTime="2025-11-25 19:36:29.749298364 +0000 UTC m=+171.665660730"
Nov 25 19:36:30 crc kubenswrapper[4775]: I1125 19:36:30.620837 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2c2hp"
Nov 25 19:36:30 crc kubenswrapper[4775]: I1125 19:36:30.626874 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2c2hp"
Nov 25 19:36:31 crc kubenswrapper[4775]: I1125 19:36:31.769523 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7h68s"
Nov 25 19:36:41 crc kubenswrapper[4775]: I1125 19:36:41.070409 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 19:36:41 crc kubenswrapper[4775]: I1125 19:36:41.071542 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 19:36:41 crc kubenswrapper[4775]: I1125 19:36:41.907370 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ph5jb"
Nov 25 19:36:44 crc kubenswrapper[4775]: E1125 19:36:44.336886 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 25 19:36:44 crc kubenswrapper[4775]: E1125 19:36:44.337232 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89nks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7kd5c_openshift-marketplace(6fe6fc97-e05b-454e-84e9-b011e4c2d8b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 19:36:44 crc kubenswrapper[4775]: E1125 19:36:44.338497 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7kd5c" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9"
Nov 25 19:36:47 crc kubenswrapper[4775]: I1125 19:36:47.290603 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 19:36:47 crc kubenswrapper[4775]: E1125 19:36:47.929202 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7kd5c" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9"
Nov 25 19:36:48 crc kubenswrapper[4775]: E1125 19:36:48.490922 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 25 19:36:48 crc kubenswrapper[4775]: E1125 19:36:48.491163 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p98dh_openshift-marketplace(958d3bd1-ce50-413a-803b-3e2bc2e6ba69): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 19:36:48 crc kubenswrapper[4775]: E1125 19:36:48.492800 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-p98dh" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69"
Nov 25 19:36:49 crc kubenswrapper[4775]: E1125 19:36:49.967614 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p98dh" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69"
Nov 25 19:36:50 crc kubenswrapper[4775]: E1125 19:36:50.078833 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 25 19:36:50 crc kubenswrapper[4775]: E1125 19:36:50.079041 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwsxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jsmsk_openshift-marketplace(25f6b7d2-1661-4d49-8648-2f665206c2e9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 19:36:50 crc kubenswrapper[4775]: E1125 19:36:50.081124 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jsmsk" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9"
Nov 25 19:36:50 crc kubenswrapper[4775]: E1125 19:36:50.083829 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 25 19:36:50 crc kubenswrapper[4775]: E1125 19:36:50.084061 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9szv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rjqh5_openshift-marketplace(394f2d01-bb7b-49b6-95f3-5430b4987766): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 19:36:50 crc kubenswrapper[4775]: E1125 19:36:50.085964 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rjqh5" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.271025 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 25 19:36:53 crc kubenswrapper[4775]: E1125 19:36:53.272121 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7414afc8-2fa8-45ab-8f5e-4898caf58072" containerName="pruner"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.272137 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7414afc8-2fa8-45ab-8f5e-4898caf58072" containerName="pruner"
Nov 25 19:36:53 crc kubenswrapper[4775]: E1125 19:36:53.272151 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80307464-600f-475d-a4ed-23d3aaf98297" containerName="pruner"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.272158 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="80307464-600f-475d-a4ed-23d3aaf98297" containerName="pruner"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.272296 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="80307464-600f-475d-a4ed-23d3aaf98297" containerName="pruner"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.272310 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7414afc8-2fa8-45ab-8f5e-4898caf58072" containerName="pruner"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.277460 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.280385 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.280799 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.282897 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.420242 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.420322 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.521977 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.522043 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.522135 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.544225 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 19:36:53 crc kubenswrapper[4775]: I1125 19:36:53.594494 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 19:36:53 crc kubenswrapper[4775]: E1125 19:36:53.918258 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rjqh5" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766"
Nov 25 19:36:53 crc kubenswrapper[4775]: E1125 19:36:53.918956 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jsmsk" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.034743 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.035209 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qzbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-w2698_openshift-marketplace(b45b3f08-fc2c-46cc-b48d-edfc0183c332): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.037685 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-w2698" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.062887 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.063094 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2htw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4x5nk_openshift-marketplace(0e5bac86-7638-49bd-b896-444ff16bc88c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.063279 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.063370 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p86hz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-brl6t_openshift-marketplace(6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.064556 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4x5nk" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.064633 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-brl6t" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.098604 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.098806 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkkjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hnvt5_openshift-marketplace(3516b667-e83c-45a5-9f21-6bf5e0572b9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.100296 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hnvt5" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a"
Nov 25 19:36:54 crc
kubenswrapper[4775]: I1125 19:36:54.148799 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 19:36:54 crc kubenswrapper[4775]: I1125 19:36:54.913686 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c35c26d2-5d91-473a-ad11-9c2384ee86a8","Type":"ContainerStarted","Data":"537837ea4d5933c6594a2df3c12c573423d901218370bd0e4d95e1971e6e77f9"} Nov 25 19:36:54 crc kubenswrapper[4775]: I1125 19:36:54.914308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c35c26d2-5d91-473a-ad11-9c2384ee86a8","Type":"ContainerStarted","Data":"fd0d283a6c35849d3639776ce75f15c35f3d087451c0e6ea19fae50bff446622"} Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.916553 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-brl6t" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.916803 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hnvt5" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" Nov 25 19:36:54 crc kubenswrapper[4775]: E1125 19:36:54.916983 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-w2698" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" Nov 25 19:36:54 crc 
kubenswrapper[4775]: E1125 19:36:54.917481 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4x5nk" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" Nov 25 19:36:55 crc kubenswrapper[4775]: I1125 19:36:55.009632 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.00960125 podStartE2EDuration="2.00960125s" podCreationTimestamp="2025-11-25 19:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:55.003045222 +0000 UTC m=+196.919407628" watchObservedRunningTime="2025-11-25 19:36:55.00960125 +0000 UTC m=+196.925963656" Nov 25 19:36:55 crc kubenswrapper[4775]: I1125 19:36:55.922056 4775 generic.go:334] "Generic (PLEG): container finished" podID="c35c26d2-5d91-473a-ad11-9c2384ee86a8" containerID="537837ea4d5933c6594a2df3c12c573423d901218370bd0e4d95e1971e6e77f9" exitCode=0 Nov 25 19:36:55 crc kubenswrapper[4775]: I1125 19:36:55.922162 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c35c26d2-5d91-473a-ad11-9c2384ee86a8","Type":"ContainerDied","Data":"537837ea4d5933c6594a2df3c12c573423d901218370bd0e4d95e1971e6e77f9"} Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.179209 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.283312 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kube-api-access\") pod \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\" (UID: \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\") " Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.283465 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kubelet-dir\") pod \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\" (UID: \"c35c26d2-5d91-473a-ad11-9c2384ee86a8\") " Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.283812 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c35c26d2-5d91-473a-ad11-9c2384ee86a8" (UID: "c35c26d2-5d91-473a-ad11-9c2384ee86a8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.293557 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c35c26d2-5d91-473a-ad11-9c2384ee86a8" (UID: "c35c26d2-5d91-473a-ad11-9c2384ee86a8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.384950 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.385024 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c35c26d2-5d91-473a-ad11-9c2384ee86a8-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.935684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c35c26d2-5d91-473a-ad11-9c2384ee86a8","Type":"ContainerDied","Data":"fd0d283a6c35849d3639776ce75f15c35f3d087451c0e6ea19fae50bff446622"} Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.935729 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 19:36:57 crc kubenswrapper[4775]: I1125 19:36:57.935745 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0d283a6c35849d3639776ce75f15c35f3d087451c0e6ea19fae50bff446622" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.268067 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 19:36:58 crc kubenswrapper[4775]: E1125 19:36:58.268512 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35c26d2-5d91-473a-ad11-9c2384ee86a8" containerName="pruner" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.268542 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35c26d2-5d91-473a-ad11-9c2384ee86a8" containerName="pruner" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.268867 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c35c26d2-5d91-473a-ad11-9c2384ee86a8" containerName="pruner" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.269706 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.279129 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.279151 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.283091 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.399425 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.399490 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kube-api-access\") pod \"installer-9-crc\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.399513 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-var-lock\") pod \"installer-9-crc\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.500368 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kube-api-access\") pod \"installer-9-crc\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.500423 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-var-lock\") pod \"installer-9-crc\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.500489 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.500565 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.501006 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-var-lock\") pod \"installer-9-crc\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.518954 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kube-api-access\") pod \"installer-9-crc\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " 
pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:58 crc kubenswrapper[4775]: I1125 19:36:58.600421 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:36:59 crc kubenswrapper[4775]: I1125 19:36:59.052269 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 19:36:59 crc kubenswrapper[4775]: W1125 19:36:59.063834 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod103567fc_c7f2_4a0a_ba9b_674148cbda9f.slice/crio-2e184b4f10acc0e0d3e2ef011a6f25b5b42eba10af85b099afdba1586ee56e45 WatchSource:0}: Error finding container 2e184b4f10acc0e0d3e2ef011a6f25b5b42eba10af85b099afdba1586ee56e45: Status 404 returned error can't find the container with id 2e184b4f10acc0e0d3e2ef011a6f25b5b42eba10af85b099afdba1586ee56e45 Nov 25 19:36:59 crc kubenswrapper[4775]: I1125 19:36:59.924160 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dhhr"] Nov 25 19:36:59 crc kubenswrapper[4775]: I1125 19:36:59.957600 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"103567fc-c7f2-4a0a-ba9b-674148cbda9f","Type":"ContainerStarted","Data":"a972dc879cb430ecf27bf8eb4961fc7666a49ef307f156d3804f3f489f37fc9e"} Nov 25 19:36:59 crc kubenswrapper[4775]: I1125 19:36:59.957673 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"103567fc-c7f2-4a0a-ba9b-674148cbda9f","Type":"ContainerStarted","Data":"2e184b4f10acc0e0d3e2ef011a6f25b5b42eba10af85b099afdba1586ee56e45"} Nov 25 19:36:59 crc kubenswrapper[4775]: I1125 19:36:59.987588 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.987564582 podStartE2EDuration="1.987564582s" 
podCreationTimestamp="2025-11-25 19:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:36:59.986960495 +0000 UTC m=+201.903322861" watchObservedRunningTime="2025-11-25 19:36:59.987564582 +0000 UTC m=+201.903926948" Nov 25 19:37:00 crc kubenswrapper[4775]: I1125 19:37:00.966469 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kd5c" event={"ID":"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9","Type":"ContainerStarted","Data":"23517e2701499712ca3519146c94e32eeec3839c7755ac52a6c97490450ed308"} Nov 25 19:37:01 crc kubenswrapper[4775]: I1125 19:37:01.975569 4775 generic.go:334] "Generic (PLEG): container finished" podID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerID="23517e2701499712ca3519146c94e32eeec3839c7755ac52a6c97490450ed308" exitCode=0 Nov 25 19:37:01 crc kubenswrapper[4775]: I1125 19:37:01.975642 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kd5c" event={"ID":"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9","Type":"ContainerDied","Data":"23517e2701499712ca3519146c94e32eeec3839c7755ac52a6c97490450ed308"} Nov 25 19:37:02 crc kubenswrapper[4775]: I1125 19:37:02.985855 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kd5c" event={"ID":"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9","Type":"ContainerStarted","Data":"269cce66bad05f6a13b5fcc56c039811094ffa6667aa6a916cf9e5f0aae2a584"} Nov 25 19:37:03 crc kubenswrapper[4775]: I1125 19:37:03.011237 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7kd5c" podStartSLOduration=1.982662605 podStartE2EDuration="54.011207489s" podCreationTimestamp="2025-11-25 19:36:09 +0000 UTC" firstStartedPulling="2025-11-25 19:36:10.369807196 +0000 UTC m=+152.286169562" lastFinishedPulling="2025-11-25 19:37:02.39835208 +0000 UTC 
m=+204.314714446" observedRunningTime="2025-11-25 19:37:03.009100009 +0000 UTC m=+204.925462405" watchObservedRunningTime="2025-11-25 19:37:03.011207489 +0000 UTC m=+204.927569885" Nov 25 19:37:07 crc kubenswrapper[4775]: I1125 19:37:07.011244 4775 generic.go:334] "Generic (PLEG): container finished" podID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerID="6fb74f15a9dd517488ef94d9d92789c96315b3e773c894423ed40866ddd2d139" exitCode=0 Nov 25 19:37:07 crc kubenswrapper[4775]: I1125 19:37:07.011378 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p98dh" event={"ID":"958d3bd1-ce50-413a-803b-3e2bc2e6ba69","Type":"ContainerDied","Data":"6fb74f15a9dd517488ef94d9d92789c96315b3e773c894423ed40866ddd2d139"} Nov 25 19:37:07 crc kubenswrapper[4775]: I1125 19:37:07.016272 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsmsk" event={"ID":"25f6b7d2-1661-4d49-8648-2f665206c2e9","Type":"ContainerStarted","Data":"a4641ebaebfc267cb3e684c417fea96db0d5482d1b3ae30bce8473578211171b"} Nov 25 19:37:08 crc kubenswrapper[4775]: I1125 19:37:08.022481 4775 generic.go:334] "Generic (PLEG): container finished" podID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerID="a4641ebaebfc267cb3e684c417fea96db0d5482d1b3ae30bce8473578211171b" exitCode=0 Nov 25 19:37:08 crc kubenswrapper[4775]: I1125 19:37:08.022712 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsmsk" event={"ID":"25f6b7d2-1661-4d49-8648-2f665206c2e9","Type":"ContainerDied","Data":"a4641ebaebfc267cb3e684c417fea96db0d5482d1b3ae30bce8473578211171b"} Nov 25 19:37:09 crc kubenswrapper[4775]: I1125 19:37:09.440875 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:37:09 crc kubenswrapper[4775]: I1125 19:37:09.440949 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:37:09 crc kubenswrapper[4775]: I1125 19:37:09.771256 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:37:10 crc kubenswrapper[4775]: I1125 19:37:10.095024 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:37:11 crc kubenswrapper[4775]: E1125 19:37:11.009576 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8ac5d6_cdb8_4bf0_8c8c_1970864a85d1.slice/crio-conmon-ac3f8b23ca9691e4f9bde0baade6d4ffa10dbe4f01fb9c5ddec31216ce16710d.scope\": RecentStats: unable to find data in memory cache]" Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.042465 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x5nk" event={"ID":"0e5bac86-7638-49bd-b896-444ff16bc88c","Type":"ContainerStarted","Data":"719c370a9accc8d084a8de28fcd6e6c2e9808879c2b96c422af5ef20ddd96114"} Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.046506 4775 generic.go:334] "Generic (PLEG): container finished" podID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerID="ac3f8b23ca9691e4f9bde0baade6d4ffa10dbe4f01fb9c5ddec31216ce16710d" exitCode=0 Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.046569 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brl6t" event={"ID":"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1","Type":"ContainerDied","Data":"ac3f8b23ca9691e4f9bde0baade6d4ffa10dbe4f01fb9c5ddec31216ce16710d"} Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.048989 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjqh5" 
event={"ID":"394f2d01-bb7b-49b6-95f3-5430b4987766","Type":"ContainerStarted","Data":"77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b"} Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.051317 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsmsk" event={"ID":"25f6b7d2-1661-4d49-8648-2f665206c2e9","Type":"ContainerStarted","Data":"3e2185c0afa4f5b972d337b7087a1e08bfb119a23d3fc18b9f8dee8dca2156e1"} Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.054705 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hnvt5" event={"ID":"3516b667-e83c-45a5-9f21-6bf5e0572b9a","Type":"ContainerStarted","Data":"1c0d45d2c4f1938e156d87b399c3737d5cb55e73c2b1c7af70cbdab3a6293cde"} Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.058684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2698" event={"ID":"b45b3f08-fc2c-46cc-b48d-edfc0183c332","Type":"ContainerStarted","Data":"88854a2016b9e5c5d40bed64daf14652446b291c79fb69b9dc3d16acbe0c2e69"} Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.061086 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p98dh" event={"ID":"958d3bd1-ce50-413a-803b-3e2bc2e6ba69","Type":"ContainerStarted","Data":"f548286d1e7f97124c769aed6fdc5c323dcfedeee3505861b4a76d17b7ec6d0b"} Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.072119 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.072196 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.072254 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.073104 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.073247 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c" gracePeriod=600 Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.097226 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jsmsk" podStartSLOduration=3.629885773 podStartE2EDuration="1m3.097203184s" podCreationTimestamp="2025-11-25 19:36:08 +0000 UTC" firstStartedPulling="2025-11-25 19:36:10.362684873 +0000 UTC m=+152.279047239" lastFinishedPulling="2025-11-25 19:37:09.830002284 +0000 UTC m=+211.746364650" observedRunningTime="2025-11-25 19:37:11.093955082 +0000 UTC m=+213.010317468" watchObservedRunningTime="2025-11-25 19:37:11.097203184 +0000 UTC m=+213.013565550" Nov 25 19:37:11 crc kubenswrapper[4775]: I1125 19:37:11.099561 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-7kd5c"] Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.082573 4775 generic.go:334] "Generic (PLEG): container finished" podID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerID="1c0d45d2c4f1938e156d87b399c3737d5cb55e73c2b1c7af70cbdab3a6293cde" exitCode=0 Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.082725 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hnvt5" event={"ID":"3516b667-e83c-45a5-9f21-6bf5e0572b9a","Type":"ContainerDied","Data":"1c0d45d2c4f1938e156d87b399c3737d5cb55e73c2b1c7af70cbdab3a6293cde"} Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.088631 4775 generic.go:334] "Generic (PLEG): container finished" podID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerID="77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b" exitCode=0 Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.088761 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjqh5" event={"ID":"394f2d01-bb7b-49b6-95f3-5430b4987766","Type":"ContainerDied","Data":"77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b"} Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.097275 4775 generic.go:334] "Generic (PLEG): container finished" podID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerID="88854a2016b9e5c5d40bed64daf14652446b291c79fb69b9dc3d16acbe0c2e69" exitCode=0 Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.097400 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2698" event={"ID":"b45b3f08-fc2c-46cc-b48d-edfc0183c332","Type":"ContainerDied","Data":"88854a2016b9e5c5d40bed64daf14652446b291c79fb69b9dc3d16acbe0c2e69"} Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.105524 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" 
containerID="8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c" exitCode=0 Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.105620 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c"} Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.111078 4775 generic.go:334] "Generic (PLEG): container finished" podID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerID="719c370a9accc8d084a8de28fcd6e6c2e9808879c2b96c422af5ef20ddd96114" exitCode=0 Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.111357 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x5nk" event={"ID":"0e5bac86-7638-49bd-b896-444ff16bc88c","Type":"ContainerDied","Data":"719c370a9accc8d084a8de28fcd6e6c2e9808879c2b96c422af5ef20ddd96114"} Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.111989 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7kd5c" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerName="registry-server" containerID="cri-o://269cce66bad05f6a13b5fcc56c039811094ffa6667aa6a916cf9e5f0aae2a584" gracePeriod=2 Nov 25 19:37:12 crc kubenswrapper[4775]: I1125 19:37:12.222234 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p98dh" podStartSLOduration=3.863369691 podStartE2EDuration="1m1.222211341s" podCreationTimestamp="2025-11-25 19:36:11 +0000 UTC" firstStartedPulling="2025-11-25 19:36:12.44531368 +0000 UTC m=+154.361676046" lastFinishedPulling="2025-11-25 19:37:09.80415533 +0000 UTC m=+211.720517696" observedRunningTime="2025-11-25 19:37:12.219241118 +0000 UTC m=+214.135603504" watchObservedRunningTime="2025-11-25 19:37:12.222211341 +0000 UTC m=+214.138573717" 
Nov 25 19:37:13 crc kubenswrapper[4775]: I1125 19:37:13.121979 4775 generic.go:334] "Generic (PLEG): container finished" podID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerID="269cce66bad05f6a13b5fcc56c039811094ffa6667aa6a916cf9e5f0aae2a584" exitCode=0 Nov 25 19:37:13 crc kubenswrapper[4775]: I1125 19:37:13.122147 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kd5c" event={"ID":"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9","Type":"ContainerDied","Data":"269cce66bad05f6a13b5fcc56c039811094ffa6667aa6a916cf9e5f0aae2a584"} Nov 25 19:37:13 crc kubenswrapper[4775]: I1125 19:37:13.126028 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"68860d36c20c28f09e5eee4f954a6781074667da8ec5ed23c8a9114454a7a494"} Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.257805 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.444683 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-utilities\") pod \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.445154 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89nks\" (UniqueName: \"kubernetes.io/projected/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-kube-api-access-89nks\") pod \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.445199 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-catalog-content\") pod \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\" (UID: \"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9\") " Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.445573 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-utilities" (OuterVolumeSpecName: "utilities") pod "6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" (UID: "6fe6fc97-e05b-454e-84e9-b011e4c2d8b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.453264 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-kube-api-access-89nks" (OuterVolumeSpecName: "kube-api-access-89nks") pod "6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" (UID: "6fe6fc97-e05b-454e-84e9-b011e4c2d8b9"). InnerVolumeSpecName "kube-api-access-89nks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.516394 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" (UID: "6fe6fc97-e05b-454e-84e9-b011e4c2d8b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.546589 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.546620 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89nks\" (UniqueName: \"kubernetes.io/projected/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-kube-api-access-89nks\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:14 crc kubenswrapper[4775]: I1125 19:37:14.546632 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.146130 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hnvt5" event={"ID":"3516b667-e83c-45a5-9f21-6bf5e0572b9a","Type":"ContainerStarted","Data":"8520d549c9a67cd60bae471fb215e9e3ae8e908cc44faadef963753599b87cf2"} Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.149076 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2698" event={"ID":"b45b3f08-fc2c-46cc-b48d-edfc0183c332","Type":"ContainerStarted","Data":"67bd3ffd7ee4a7092ff9ca4be65d5b2a7dc20e26038560dc52b937eeb147b287"} Nov 25 19:37:15 crc kubenswrapper[4775]: 
I1125 19:37:15.151674 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kd5c" event={"ID":"6fe6fc97-e05b-454e-84e9-b011e4c2d8b9","Type":"ContainerDied","Data":"281e5930a5f1e84b5f8de3cf031a060ddf2e8595edc4b4cc48783dc79663eb50"} Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.151704 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7kd5c" Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.151747 4775 scope.go:117] "RemoveContainer" containerID="269cce66bad05f6a13b5fcc56c039811094ffa6667aa6a916cf9e5f0aae2a584" Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.154521 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x5nk" event={"ID":"0e5bac86-7638-49bd-b896-444ff16bc88c","Type":"ContainerStarted","Data":"19a7f127b7b0e47dd7ba49264d820659c5d56b07748bdf1ebcbf93b1d96a0727"} Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.157048 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brl6t" event={"ID":"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1","Type":"ContainerStarted","Data":"f284c026f059a9fedc8a39245fd1a05f6bfb9045613d4accaceb9a7f41c57fe9"} Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.161927 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjqh5" event={"ID":"394f2d01-bb7b-49b6-95f3-5430b4987766","Type":"ContainerStarted","Data":"90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621"} Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.169559 4775 scope.go:117] "RemoveContainer" containerID="23517e2701499712ca3519146c94e32eeec3839c7755ac52a6c97490450ed308" Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.180436 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hnvt5" 
podStartSLOduration=3.197893134 podStartE2EDuration="1m7.180413806s" podCreationTimestamp="2025-11-25 19:36:08 +0000 UTC" firstStartedPulling="2025-11-25 19:36:10.354185252 +0000 UTC m=+152.270547618" lastFinishedPulling="2025-11-25 19:37:14.336705924 +0000 UTC m=+216.253068290" observedRunningTime="2025-11-25 19:37:15.175084127 +0000 UTC m=+217.091446533" watchObservedRunningTime="2025-11-25 19:37:15.180413806 +0000 UTC m=+217.096776182" Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.195970 4775 scope.go:117] "RemoveContainer" containerID="490107346b8f8cfc6023549cd3b5342480096f52887e32f304d5ea981fa1ffb9" Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.202970 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7kd5c"] Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.216434 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7kd5c"] Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.239388 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4x5nk" podStartSLOduration=3.147753246 podStartE2EDuration="1m3.239360887s" podCreationTimestamp="2025-11-25 19:36:12 +0000 UTC" firstStartedPulling="2025-11-25 19:36:14.598530299 +0000 UTC m=+156.514892665" lastFinishedPulling="2025-11-25 19:37:14.69013794 +0000 UTC m=+216.606500306" observedRunningTime="2025-11-25 19:37:15.235090357 +0000 UTC m=+217.151452733" watchObservedRunningTime="2025-11-25 19:37:15.239360887 +0000 UTC m=+217.155723253" Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.282315 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w2698" podStartSLOduration=3.3101643530000002 podStartE2EDuration="1m4.282293979s" podCreationTimestamp="2025-11-25 19:36:11 +0000 UTC" firstStartedPulling="2025-11-25 19:36:13.551319457 +0000 UTC m=+155.467681823" 
lastFinishedPulling="2025-11-25 19:37:14.523449073 +0000 UTC m=+216.439811449" observedRunningTime="2025-11-25 19:37:15.279718187 +0000 UTC m=+217.196080573" watchObservedRunningTime="2025-11-25 19:37:15.282293979 +0000 UTC m=+217.198656345" Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.285304 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-brl6t" podStartSLOduration=3.361194199 podStartE2EDuration="1m5.285295513s" podCreationTimestamp="2025-11-25 19:36:10 +0000 UTC" firstStartedPulling="2025-11-25 19:36:12.438089044 +0000 UTC m=+154.354451410" lastFinishedPulling="2025-11-25 19:37:14.362190348 +0000 UTC m=+216.278552724" observedRunningTime="2025-11-25 19:37:15.25876178 +0000 UTC m=+217.175124146" watchObservedRunningTime="2025-11-25 19:37:15.285295513 +0000 UTC m=+217.201657879" Nov 25 19:37:15 crc kubenswrapper[4775]: I1125 19:37:15.302541 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rjqh5" podStartSLOduration=3.065643618 podStartE2EDuration="1m6.302518045s" podCreationTimestamp="2025-11-25 19:36:09 +0000 UTC" firstStartedPulling="2025-11-25 19:36:11.381892814 +0000 UTC m=+153.298255180" lastFinishedPulling="2025-11-25 19:37:14.618767251 +0000 UTC m=+216.535129607" observedRunningTime="2025-11-25 19:37:15.297891606 +0000 UTC m=+217.214253972" watchObservedRunningTime="2025-11-25 19:37:15.302518045 +0000 UTC m=+217.218880411" Nov 25 19:37:16 crc kubenswrapper[4775]: I1125 19:37:16.858520 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" path="/var/lib/kubelet/pods/6fe6fc97-e05b-454e-84e9-b011e4c2d8b9/volumes" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 19:37:19.018874 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 
19:37:19.019264 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 19:37:19.082917 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 19:37:19.242188 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 19:37:19.243479 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 19:37:19.247757 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 19:37:19.319531 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 19:37:19.629352 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rjqh5" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 19:37:19.629420 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rjqh5" Nov 25 19:37:19 crc kubenswrapper[4775]: I1125 19:37:19.687207 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rjqh5" Nov 25 19:37:20 crc kubenswrapper[4775]: I1125 19:37:20.270674 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rjqh5" Nov 25 19:37:20 crc kubenswrapper[4775]: I1125 19:37:20.277263 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:37:21 crc kubenswrapper[4775]: I1125 19:37:21.011082 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-brl6t" Nov 25 19:37:21 crc kubenswrapper[4775]: I1125 19:37:21.011169 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-brl6t" Nov 25 19:37:21 crc kubenswrapper[4775]: I1125 19:37:21.068323 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-brl6t" Nov 25 19:37:21 crc kubenswrapper[4775]: I1125 19:37:21.286077 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-brl6t" Nov 25 19:37:21 crc kubenswrapper[4775]: I1125 19:37:21.490632 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rjqh5"] Nov 25 19:37:21 crc kubenswrapper[4775]: I1125 19:37:21.494346 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p98dh" Nov 25 19:37:21 crc kubenswrapper[4775]: I1125 19:37:21.494460 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p98dh" Nov 25 19:37:21 crc kubenswrapper[4775]: I1125 19:37:21.549549 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p98dh" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.206591 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.207059 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.223084 4775 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rjqh5" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerName="registry-server" containerID="cri-o://90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621" gracePeriod=2 Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.270700 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.298217 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p98dh" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.344863 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.647182 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.647458 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.657501 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rjqh5" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.661107 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-catalog-content\") pod \"394f2d01-bb7b-49b6-95f3-5430b4987766\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.661150 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-utilities\") pod \"394f2d01-bb7b-49b6-95f3-5430b4987766\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.661200 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9szv\" (UniqueName: \"kubernetes.io/projected/394f2d01-bb7b-49b6-95f3-5430b4987766-kube-api-access-n9szv\") pod \"394f2d01-bb7b-49b6-95f3-5430b4987766\" (UID: \"394f2d01-bb7b-49b6-95f3-5430b4987766\") " Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.662271 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-utilities" (OuterVolumeSpecName: "utilities") pod "394f2d01-bb7b-49b6-95f3-5430b4987766" (UID: "394f2d01-bb7b-49b6-95f3-5430b4987766"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.671262 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/394f2d01-bb7b-49b6-95f3-5430b4987766-kube-api-access-n9szv" (OuterVolumeSpecName: "kube-api-access-n9szv") pod "394f2d01-bb7b-49b6-95f3-5430b4987766" (UID: "394f2d01-bb7b-49b6-95f3-5430b4987766"). InnerVolumeSpecName "kube-api-access-n9szv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.696606 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.723066 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "394f2d01-bb7b-49b6-95f3-5430b4987766" (UID: "394f2d01-bb7b-49b6-95f3-5430b4987766"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.762396 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.762433 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394f2d01-bb7b-49b6-95f3-5430b4987766-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:22 crc kubenswrapper[4775]: I1125 19:37:22.762444 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9szv\" (UniqueName: \"kubernetes.io/projected/394f2d01-bb7b-49b6-95f3-5430b4987766-kube-api-access-n9szv\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.234023 4775 generic.go:334] "Generic (PLEG): container finished" podID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerID="90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621" exitCode=0 Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.234153 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rjqh5" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.234338 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjqh5" event={"ID":"394f2d01-bb7b-49b6-95f3-5430b4987766","Type":"ContainerDied","Data":"90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621"} Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.234427 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjqh5" event={"ID":"394f2d01-bb7b-49b6-95f3-5430b4987766","Type":"ContainerDied","Data":"c3cce0371c2c2dad56a4f4de9bf721aed26633d3dfc2cb825f7738f7f48f21f9"} Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.234466 4775 scope.go:117] "RemoveContainer" containerID="90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.265237 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rjqh5"] Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.271825 4775 scope.go:117] "RemoveContainer" containerID="77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.273970 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rjqh5"] Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.317438 4775 scope.go:117] "RemoveContainer" containerID="b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.324471 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.355286 4775 scope.go:117] "RemoveContainer" containerID="90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621" Nov 25 19:37:23 crc 
kubenswrapper[4775]: E1125 19:37:23.355961 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621\": container with ID starting with 90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621 not found: ID does not exist" containerID="90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.356877 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621"} err="failed to get container status \"90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621\": rpc error: code = NotFound desc = could not find container \"90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621\": container with ID starting with 90570e46fbdf1a9263d1c4d639f76db4b6cc3c37e8e0159348d99fb816c6d621 not found: ID does not exist" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.356969 4775 scope.go:117] "RemoveContainer" containerID="77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b" Nov 25 19:37:23 crc kubenswrapper[4775]: E1125 19:37:23.357508 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b\": container with ID starting with 77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b not found: ID does not exist" containerID="77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.357602 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b"} err="failed to get container status 
\"77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b\": rpc error: code = NotFound desc = could not find container \"77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b\": container with ID starting with 77f58cec8490bcb5509811091708a63006c0d9ca546f0843a861333b39ba765b not found: ID does not exist" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.357775 4775 scope.go:117] "RemoveContainer" containerID="b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e" Nov 25 19:37:23 crc kubenswrapper[4775]: E1125 19:37:23.362163 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e\": container with ID starting with b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e not found: ID does not exist" containerID="b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.362224 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e"} err="failed to get container status \"b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e\": rpc error: code = NotFound desc = could not find container \"b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e\": container with ID starting with b57e1f999620c8e3dcd2093b3015446c1eb51dc0db95df656b4bded16f2f022e not found: ID does not exist" Nov 25 19:37:23 crc kubenswrapper[4775]: I1125 19:37:23.885434 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p98dh"] Nov 25 19:37:24 crc kubenswrapper[4775]: I1125 19:37:24.240615 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p98dh" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" 
containerName="registry-server" containerID="cri-o://f548286d1e7f97124c769aed6fdc5c323dcfedeee3505861b4a76d17b7ec6d0b" gracePeriod=2 Nov 25 19:37:24 crc kubenswrapper[4775]: I1125 19:37:24.855727 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766" path="/var/lib/kubelet/pods/394f2d01-bb7b-49b6-95f3-5430b4987766/volumes" Nov 25 19:37:24 crc kubenswrapper[4775]: I1125 19:37:24.971283 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" podUID="c4c88511-de83-4eb3-8e7e-b97271361717" containerName="oauth-openshift" containerID="cri-o://5d7c4a1b8f3103e25cf515f27b0218ca6b173d7f438f180f4f986e0145118344" gracePeriod=15 Nov 25 19:37:26 crc kubenswrapper[4775]: I1125 19:37:26.254469 4775 generic.go:334] "Generic (PLEG): container finished" podID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerID="f548286d1e7f97124c769aed6fdc5c323dcfedeee3505861b4a76d17b7ec6d0b" exitCode=0 Nov 25 19:37:26 crc kubenswrapper[4775]: I1125 19:37:26.254586 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p98dh" event={"ID":"958d3bd1-ce50-413a-803b-3e2bc2e6ba69","Type":"ContainerDied","Data":"f548286d1e7f97124c769aed6fdc5c323dcfedeee3505861b4a76d17b7ec6d0b"} Nov 25 19:37:26 crc kubenswrapper[4775]: I1125 19:37:26.260017 4775 generic.go:334] "Generic (PLEG): container finished" podID="c4c88511-de83-4eb3-8e7e-b97271361717" containerID="5d7c4a1b8f3103e25cf515f27b0218ca6b173d7f438f180f4f986e0145118344" exitCode=0 Nov 25 19:37:26 crc kubenswrapper[4775]: I1125 19:37:26.260098 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" event={"ID":"c4c88511-de83-4eb3-8e7e-b97271361717","Type":"ContainerDied","Data":"5d7c4a1b8f3103e25cf515f27b0218ca6b173d7f438f180f4f986e0145118344"} Nov 25 19:37:26 crc kubenswrapper[4775]: I1125 19:37:26.304186 4775 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4x5nk"] Nov 25 19:37:26 crc kubenswrapper[4775]: I1125 19:37:26.304567 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4x5nk" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerName="registry-server" containerID="cri-o://19a7f127b7b0e47dd7ba49264d820659c5d56b07748bdf1ebcbf93b1d96a0727" gracePeriod=2 Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.190833 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p98dh" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.206564 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.268238 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" event={"ID":"c4c88511-de83-4eb3-8e7e-b97271361717","Type":"ContainerDied","Data":"ef5abf39f3cd06eea2aaf5d778d236fd2a76d24212074b746e9509ff024bcb0e"} Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.268262 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8dhhr" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.268380 4775 scope.go:117] "RemoveContainer" containerID="5d7c4a1b8f3103e25cf515f27b0218ca6b173d7f438f180f4f986e0145118344" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.273805 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p98dh" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.273836 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p98dh" event={"ID":"958d3bd1-ce50-413a-803b-3e2bc2e6ba69","Type":"ContainerDied","Data":"794cc2e962953c568dc86792f6048d002a8c4b991e7e98e17f58f70def2d6953"} Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.279800 4775 generic.go:334] "Generic (PLEG): container finished" podID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerID="19a7f127b7b0e47dd7ba49264d820659c5d56b07748bdf1ebcbf93b1d96a0727" exitCode=0 Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.279847 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x5nk" event={"ID":"0e5bac86-7638-49bd-b896-444ff16bc88c","Type":"ContainerDied","Data":"19a7f127b7b0e47dd7ba49264d820659c5d56b07748bdf1ebcbf93b1d96a0727"} Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.296432 4775 scope.go:117] "RemoveContainer" containerID="f548286d1e7f97124c769aed6fdc5c323dcfedeee3505861b4a76d17b7ec6d0b" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.316142 4775 scope.go:117] "RemoveContainer" containerID="6fb74f15a9dd517488ef94d9d92789c96315b3e773c894423ed40866ddd2d139" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330722 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-cliconfig\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330768 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-session\") pod 
\"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330805 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-serving-cert\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330836 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-provider-selection\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330870 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-idp-0-file-data\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330899 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-error\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330920 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-utilities\") pod \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\" (UID: 
\"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330944 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-trusted-ca-bundle\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330973 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phtjb\" (UniqueName: \"kubernetes.io/projected/c4c88511-de83-4eb3-8e7e-b97271361717-kube-api-access-phtjb\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.330987 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-catalog-content\") pod \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.331009 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-ocp-branding-template\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.331035 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-audit-policies\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.331051 4775 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm2p7\" (UniqueName: \"kubernetes.io/projected/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-kube-api-access-vm2p7\") pod \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\" (UID: \"958d3bd1-ce50-413a-803b-3e2bc2e6ba69\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.331073 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-router-certs\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.331094 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-login\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.331125 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-service-ca\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.331141 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4c88511-de83-4eb3-8e7e-b97271361717-audit-dir\") pod \"c4c88511-de83-4eb3-8e7e-b97271361717\" (UID: \"c4c88511-de83-4eb3-8e7e-b97271361717\") " Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.331360 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c4c88511-de83-4eb3-8e7e-b97271361717-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.332365 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.333869 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.336265 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.336265 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-utilities" (OuterVolumeSpecName: "utilities") pod "958d3bd1-ce50-413a-803b-3e2bc2e6ba69" (UID: "958d3bd1-ce50-413a-803b-3e2bc2e6ba69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.336532 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.338674 4775 scope.go:117] "RemoveContainer" containerID="4cf069afbca3b2929b35c49da0fd8ca7a1649bc5c4ab71087fe8ccb0f8dcbc41" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.346603 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.346812 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c88511-de83-4eb3-8e7e-b97271361717-kube-api-access-phtjb" (OuterVolumeSpecName: "kube-api-access-phtjb") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "kube-api-access-phtjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.346934 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-kube-api-access-vm2p7" (OuterVolumeSpecName: "kube-api-access-vm2p7") pod "958d3bd1-ce50-413a-803b-3e2bc2e6ba69" (UID: "958d3bd1-ce50-413a-803b-3e2bc2e6ba69"). InnerVolumeSpecName "kube-api-access-vm2p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.348204 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.348793 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.350063 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.351927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.352507 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.354621 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.355383 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c4c88511-de83-4eb3-8e7e-b97271361717" (UID: "c4c88511-de83-4eb3-8e7e-b97271361717"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.356504 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "958d3bd1-ce50-413a-803b-3e2bc2e6ba69" (UID: "958d3bd1-ce50-413a-803b-3e2bc2e6ba69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433278 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433358 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433393 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433427 4775 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433467 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433497 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433528 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433563 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433595 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433626 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phtjb\" (UniqueName: \"kubernetes.io/projected/c4c88511-de83-4eb3-8e7e-b97271361717-kube-api-access-phtjb\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433684 4775 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433706 4775 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433730 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm2p7\" (UniqueName: \"kubernetes.io/projected/958d3bd1-ce50-413a-803b-3e2bc2e6ba69-kube-api-access-vm2p7\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433802 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433828 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433849 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4c88511-de83-4eb3-8e7e-b97271361717-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.433869 4775 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4c88511-de83-4eb3-8e7e-b97271361717-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 
19:37:27.634781 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dhhr"] Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.651108 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dhhr"] Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.655977 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p98dh"] Nov 25 19:37:27 crc kubenswrapper[4775]: I1125 19:37:27.659612 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p98dh"] Nov 25 19:37:28 crc kubenswrapper[4775]: I1125 19:37:28.872396 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" path="/var/lib/kubelet/pods/958d3bd1-ce50-413a-803b-3e2bc2e6ba69/volumes" Nov 25 19:37:28 crc kubenswrapper[4775]: I1125 19:37:28.873583 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c88511-de83-4eb3-8e7e-b97271361717" path="/var/lib/kubelet/pods/c4c88511-de83-4eb3-8e7e-b97271361717/volumes" Nov 25 19:37:28 crc kubenswrapper[4775]: I1125 19:37:28.964890 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.061360 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2htw\" (UniqueName: \"kubernetes.io/projected/0e5bac86-7638-49bd-b896-444ff16bc88c-kube-api-access-v2htw\") pod \"0e5bac86-7638-49bd-b896-444ff16bc88c\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.061683 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-catalog-content\") pod \"0e5bac86-7638-49bd-b896-444ff16bc88c\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.061784 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-utilities\") pod \"0e5bac86-7638-49bd-b896-444ff16bc88c\" (UID: \"0e5bac86-7638-49bd-b896-444ff16bc88c\") " Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.062766 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-utilities" (OuterVolumeSpecName: "utilities") pod "0e5bac86-7638-49bd-b896-444ff16bc88c" (UID: "0e5bac86-7638-49bd-b896-444ff16bc88c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.067283 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e5bac86-7638-49bd-b896-444ff16bc88c-kube-api-access-v2htw" (OuterVolumeSpecName: "kube-api-access-v2htw") pod "0e5bac86-7638-49bd-b896-444ff16bc88c" (UID: "0e5bac86-7638-49bd-b896-444ff16bc88c"). InnerVolumeSpecName "kube-api-access-v2htw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.152042 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e5bac86-7638-49bd-b896-444ff16bc88c" (UID: "0e5bac86-7638-49bd-b896-444ff16bc88c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.163129 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.163182 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2htw\" (UniqueName: \"kubernetes.io/projected/0e5bac86-7638-49bd-b896-444ff16bc88c-kube-api-access-v2htw\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.163198 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5bac86-7638-49bd-b896-444ff16bc88c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.304226 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x5nk" event={"ID":"0e5bac86-7638-49bd-b896-444ff16bc88c","Type":"ContainerDied","Data":"4dcb62a2fac6b2e5aee6021c2e99e4b90e789966ba79f5a09971a4035fe5d0bf"} Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.304777 4775 scope.go:117] "RemoveContainer" containerID="19a7f127b7b0e47dd7ba49264d820659c5d56b07748bdf1ebcbf93b1d96a0727" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.304322 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4x5nk" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.323786 4775 scope.go:117] "RemoveContainer" containerID="719c370a9accc8d084a8de28fcd6e6c2e9808879c2b96c422af5ef20ddd96114" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.349036 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4x5nk"] Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.349055 4775 scope.go:117] "RemoveContainer" containerID="fb94ff879f4a65f332b9620d7048cc871b5bef63a7f5e58e68f3a25faf46b4b9" Nov 25 19:37:29 crc kubenswrapper[4775]: I1125 19:37:29.357382 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4x5nk"] Nov 25 19:37:30 crc kubenswrapper[4775]: I1125 19:37:30.858640 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" path="/var/lib/kubelet/pods/0e5bac86-7638-49bd-b896-444ff16bc88c/volumes" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.426587 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5dc57f868f-tkjdt"] Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427529 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerName="extract-utilities" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427543 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerName="extract-utilities" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427551 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerName="extract-content" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427576 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766" 
containerName="extract-content" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427591 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerName="extract-content" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427597 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerName="extract-content" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427609 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerName="extract-utilities" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427615 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerName="extract-utilities" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427626 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427632 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427641 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerName="extract-content" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427678 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerName="extract-content" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427688 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c88511-de83-4eb3-8e7e-b97271361717" containerName="oauth-openshift" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427695 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c88511-de83-4eb3-8e7e-b97271361717" 
containerName="oauth-openshift" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427704 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerName="extract-utilities" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427710 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerName="extract-utilities" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427723 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerName="extract-utilities" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427729 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerName="extract-utilities" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427738 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerName="extract-content" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427745 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerName="extract-content" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427752 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427759 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427773 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427779 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" 
containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: E1125 19:37:35.427790 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427796 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427919 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c88511-de83-4eb3-8e7e-b97271361717" containerName="oauth-openshift" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427931 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e5bac86-7638-49bd-b896-444ff16bc88c" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427941 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="394f2d01-bb7b-49b6-95f3-5430b4987766" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427952 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="958d3bd1-ce50-413a-803b-3e2bc2e6ba69" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.427960 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe6fc97-e05b-454e-84e9-b011e4c2d8b9" containerName="registry-server" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.428483 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.435681 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.436553 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.436730 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.436989 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.437116 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.437166 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.437583 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.437864 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.438163 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.439122 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 19:37:35 crc 
kubenswrapper[4775]: I1125 19:37:35.439977 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.442747 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.451848 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.454040 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.454069 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dc57f868f-tkjdt"] Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474297 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474358 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474393 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/68153acd-a0d8-4d45-b70d-9151cabdb18d-audit-dir\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474418 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-audit-policies\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474442 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czrm7\" (UniqueName: \"kubernetes.io/projected/68153acd-a0d8-4d45-b70d-9151cabdb18d-kube-api-access-czrm7\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474493 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474543 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-template-login\") pod 
\"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474601 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-session\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474670 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474705 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474748 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-template-error\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 
19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474785 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474824 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.474862 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.476060 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575687 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-audit-policies\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " 
pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575761 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czrm7\" (UniqueName: \"kubernetes.io/projected/68153acd-a0d8-4d45-b70d-9151cabdb18d-kube-api-access-czrm7\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575788 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575814 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-template-login\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575851 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-session\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575874 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575896 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575917 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-template-error\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575936 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " 
pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.575986 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.576029 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.576066 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.576091 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/68153acd-a0d8-4d45-b70d-9151cabdb18d-audit-dir\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.576213 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/68153acd-a0d8-4d45-b70d-9151cabdb18d-audit-dir\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.577301 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-audit-policies\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.578468 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.578920 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.580567 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc 
kubenswrapper[4775]: I1125 19:37:35.584794 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-template-error\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.587148 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-session\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.587797 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.588059 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.588240 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-user-template-login\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.588293 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.589697 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.590817 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/68153acd-a0d8-4d45-b70d-9151cabdb18d-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.601557 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czrm7\" (UniqueName: \"kubernetes.io/projected/68153acd-a0d8-4d45-b70d-9151cabdb18d-kube-api-access-czrm7\") pod \"oauth-openshift-5dc57f868f-tkjdt\" (UID: \"68153acd-a0d8-4d45-b70d-9151cabdb18d\") " pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 
19:37:35 crc kubenswrapper[4775]: I1125 19:37:35.749990 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:36 crc kubenswrapper[4775]: I1125 19:37:36.053846 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dc57f868f-tkjdt"] Nov 25 19:37:36 crc kubenswrapper[4775]: I1125 19:37:36.356711 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" event={"ID":"68153acd-a0d8-4d45-b70d-9151cabdb18d","Type":"ContainerStarted","Data":"211a0a9dc2d12c52525f85c9fd5f2fa05abc48e874932c9cf88c4fd28f9e0d46"} Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.305642 4775 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.306510 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210" gracePeriod=15 Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.306740 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a" gracePeriod=15 Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.306809 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692" 
gracePeriod=15 Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.306865 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9" gracePeriod=15 Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.306922 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1" gracePeriod=15 Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.313631 4775 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 19:37:37 crc kubenswrapper[4775]: E1125 19:37:37.315459 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.315525 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 19:37:37 crc kubenswrapper[4775]: E1125 19:37:37.315555 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.315569 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 19:37:37 crc kubenswrapper[4775]: E1125 19:37:37.315593 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.315606 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 19:37:37 crc kubenswrapper[4775]: E1125 19:37:37.315626 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.315642 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 19:37:37 crc kubenswrapper[4775]: E1125 19:37:37.315688 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.315702 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 19:37:37 crc kubenswrapper[4775]: E1125 19:37:37.315725 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.315738 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 19:37:37 crc kubenswrapper[4775]: E1125 19:37:37.315757 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.315769 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.316039 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.316064 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.316087 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.316108 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.316125 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.316143 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.316159 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 19:37:37 crc kubenswrapper[4775]: E1125 19:37:37.316359 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.316373 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.319431 4775 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.322383 4775 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.330893 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.369057 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" event={"ID":"68153acd-a0d8-4d45-b70d-9151cabdb18d","Type":"ContainerStarted","Data":"dfcc2c1fdf6a2885d603974918411e39095cdca7a800faaf220747055f470aad"} Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.369909 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.379540 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.506344 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.506435 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: 
I1125 19:37:37.506578 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.506695 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.506739 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.506815 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.507216 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 
19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.507353 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.608993 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.609587 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.609175 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.609638 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 
crc kubenswrapper[4775]: I1125 19:37:37.609792 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.609851 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.609878 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.609956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.609999 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.610052 4775 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.610091 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.610144 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.610185 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.610207 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.610274 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:37 crc kubenswrapper[4775]: I1125 19:37:37.610292 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:38 crc kubenswrapper[4775]: I1125 19:37:38.381934 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 19:37:38 crc kubenswrapper[4775]: I1125 19:37:38.386248 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 19:37:38 crc kubenswrapper[4775]: I1125 19:37:38.388313 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a" exitCode=0 Nov 25 19:37:38 crc kubenswrapper[4775]: I1125 19:37:38.388366 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692" exitCode=0 Nov 25 19:37:38 crc kubenswrapper[4775]: I1125 19:37:38.388385 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9" exitCode=0 Nov 25 19:37:38 crc kubenswrapper[4775]: I1125 19:37:38.388399 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1" exitCode=2 Nov 25 19:37:38 crc kubenswrapper[4775]: I1125 19:37:38.388477 4775 scope.go:117] "RemoveContainer" containerID="bae0dc8980ff2cca94e11469b963ab22986d72050575958edffb3681dbdc0e89" Nov 25 19:37:38 crc kubenswrapper[4775]: I1125 19:37:38.392489 4775 generic.go:334] "Generic (PLEG): container finished" podID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" containerID="a972dc879cb430ecf27bf8eb4961fc7666a49ef307f156d3804f3f489f37fc9e" exitCode=0 Nov 25 19:37:38 crc kubenswrapper[4775]: I1125 19:37:38.392600 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"103567fc-c7f2-4a0a-ba9b-674148cbda9f","Type":"ContainerDied","Data":"a972dc879cb430ecf27bf8eb4961fc7666a49ef307f156d3804f3f489f37fc9e"} Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.428843 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 19:37:39 crc kubenswrapper[4775]: E1125 19:37:39.604471 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:39 crc kubenswrapper[4775]: E1125 19:37:39.605008 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:39 crc kubenswrapper[4775]: E1125 19:37:39.605523 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:39 crc 
kubenswrapper[4775]: E1125 19:37:39.605827 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:39 crc kubenswrapper[4775]: E1125 19:37:39.606020 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.606047 4775 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 19:37:39 crc kubenswrapper[4775]: E1125 19:37:39.606195 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="200ms" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.803329 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.804267 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.805527 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:37:39 crc kubenswrapper[4775]: E1125 19:37:39.807120 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="400ms" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.948426 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kube-api-access\") pod \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.948538 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.948744 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kubelet-dir\") pod \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.948819 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.948837 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: 
"cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.948911 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-var-lock\") pod \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\" (UID: \"103567fc-c7f2-4a0a-ba9b-674148cbda9f\") " Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.948929 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.948941 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "103567fc-c7f2-4a0a-ba9b-674148cbda9f" (UID: "103567fc-c7f2-4a0a-ba9b-674148cbda9f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.949090 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.949082 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-var-lock" (OuterVolumeSpecName: "var-lock") pod "103567fc-c7f2-4a0a-ba9b-674148cbda9f" (UID: "103567fc-c7f2-4a0a-ba9b-674148cbda9f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.949137 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.949603 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.949642 4775 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.949705 4775 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103567fc-c7f2-4a0a-ba9b-674148cbda9f-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.949730 4775 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.949754 4775 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:39 crc kubenswrapper[4775]: I1125 19:37:39.959254 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "103567fc-c7f2-4a0a-ba9b-674148cbda9f" (UID: "103567fc-c7f2-4a0a-ba9b-674148cbda9f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.051702 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/103567fc-c7f2-4a0a-ba9b-674148cbda9f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 19:37:40 crc kubenswrapper[4775]: E1125 19:37:40.209202 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="800ms" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.444774 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.444927 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"103567fc-c7f2-4a0a-ba9b-674148cbda9f","Type":"ContainerDied","Data":"2e184b4f10acc0e0d3e2ef011a6f25b5b42eba10af85b099afdba1586ee56e45"} Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.444985 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e184b4f10acc0e0d3e2ef011a6f25b5b42eba10af85b099afdba1586ee56e45" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.449091 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.450143 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210" exitCode=0 Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.450230 4775 scope.go:117] "RemoveContainer" 
containerID="138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.450271 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.487403 4775 scope.go:117] "RemoveContainer" containerID="c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.514349 4775 scope.go:117] "RemoveContainer" containerID="7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.539593 4775 scope.go:117] "RemoveContainer" containerID="edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.569724 4775 scope.go:117] "RemoveContainer" containerID="74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.608347 4775 scope.go:117] "RemoveContainer" containerID="381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.638415 4775 scope.go:117] "RemoveContainer" containerID="138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a" Nov 25 19:37:40 crc kubenswrapper[4775]: E1125 19:37:40.639127 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\": container with ID starting with 138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a not found: ID does not exist" containerID="138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.639195 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a"} err="failed to get container status \"138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\": rpc error: code = NotFound desc = could not find container \"138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a\": container with ID starting with 138532f03f708ba384712b616316381b0335774e384d0e968c53a4937b51715a not found: ID does not exist" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.639346 4775 scope.go:117] "RemoveContainer" containerID="c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692" Nov 25 19:37:40 crc kubenswrapper[4775]: E1125 19:37:40.639903 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\": container with ID starting with c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692 not found: ID does not exist" containerID="c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.640010 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692"} err="failed to get container status \"c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\": rpc error: code = NotFound desc = could not find container \"c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692\": container with ID starting with c6b5670ae2a35c7498a47554962aeff3e56c26d7b73e3c619f9e47757c7f8692 not found: ID does not exist" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.640081 4775 scope.go:117] "RemoveContainer" containerID="7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9" Nov 25 19:37:40 crc kubenswrapper[4775]: E1125 19:37:40.640533 4775 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\": container with ID starting with 7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9 not found: ID does not exist" containerID="7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.640561 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9"} err="failed to get container status \"7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\": rpc error: code = NotFound desc = could not find container \"7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9\": container with ID starting with 7db7be5b8b03c6517629bc6ee8fceca8586e2ac8eab4f86e1017dcc1e51df0a9 not found: ID does not exist" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.640580 4775 scope.go:117] "RemoveContainer" containerID="edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1" Nov 25 19:37:40 crc kubenswrapper[4775]: E1125 19:37:40.640881 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\": container with ID starting with edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1 not found: ID does not exist" containerID="edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.640912 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1"} err="failed to get container status \"edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\": rpc error: code = NotFound desc = could not find container 
\"edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1\": container with ID starting with edf107ee8703cdd552b4d9727cbf2b6ac2086c8bd65eea970a5270b97e14bfe1 not found: ID does not exist" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.640929 4775 scope.go:117] "RemoveContainer" containerID="74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210" Nov 25 19:37:40 crc kubenswrapper[4775]: E1125 19:37:40.641900 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\": container with ID starting with 74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210 not found: ID does not exist" containerID="74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.641930 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210"} err="failed to get container status \"74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\": rpc error: code = NotFound desc = could not find container \"74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210\": container with ID starting with 74c08ad8fed16a23a37dcadbee5839b34faa9bd4f98f3b19463f5c94b5299210 not found: ID does not exist" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.641948 4775 scope.go:117] "RemoveContainer" containerID="381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32" Nov 25 19:37:40 crc kubenswrapper[4775]: E1125 19:37:40.642952 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\": container with ID starting with 381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32 not found: ID does not exist" 
containerID="381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.642979 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32"} err="failed to get container status \"381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\": rpc error: code = NotFound desc = could not find container \"381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32\": container with ID starting with 381b9c78b0e52f2db918f43e44c70bb79ea8fc53de49de2ca109c3967c42da32 not found: ID does not exist" Nov 25 19:37:40 crc kubenswrapper[4775]: I1125 19:37:40.860225 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 19:37:41 crc kubenswrapper[4775]: E1125 19:37:41.010685 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="1.6s" Nov 25 19:37:42 crc kubenswrapper[4775]: E1125 19:37:42.372530 4775 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:42 crc kubenswrapper[4775]: I1125 19:37:42.373257 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:42 crc kubenswrapper[4775]: I1125 19:37:42.378837 4775 status_manager.go:851] "Failed to get status for pod" podUID="68153acd-a0d8-4d45-b70d-9151cabdb18d" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5dc57f868f-tkjdt\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:42 crc kubenswrapper[4775]: I1125 19:37:42.383701 4775 status_manager.go:851] "Failed to get status for pod" podUID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:42 crc kubenswrapper[4775]: I1125 19:37:42.384417 4775 status_manager.go:851] "Failed to get status for pod" podUID="68153acd-a0d8-4d45-b70d-9151cabdb18d" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5dc57f868f-tkjdt\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:42 crc kubenswrapper[4775]: I1125 19:37:42.385777 4775 status_manager.go:851] "Failed to get status for pod" podUID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:42 crc kubenswrapper[4775]: I1125 19:37:42.386888 4775 status_manager.go:851] "Failed to get status for pod" podUID="68153acd-a0d8-4d45-b70d-9151cabdb18d" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5dc57f868f-tkjdt\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:42 crc kubenswrapper[4775]: W1125 19:37:42.408876 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-189e88b18ffd54ec12cb1bae99471bf85c15eb44153377e02a159109141c42f1 WatchSource:0}: Error finding container 189e88b18ffd54ec12cb1bae99471bf85c15eb44153377e02a159109141c42f1: Status 404 returned error can't find the container with id 189e88b18ffd54ec12cb1bae99471bf85c15eb44153377e02a159109141c42f1 Nov 25 19:37:42 crc kubenswrapper[4775]: E1125 19:37:42.414514 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.248:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b571d4da28fb0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 19:37:42.413848496 +0000 UTC m=+244.330210862,LastTimestamp:2025-11-25 19:37:42.413848496 +0000 UTC m=+244.330210862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 19:37:42 crc kubenswrapper[4775]: I1125 19:37:42.469123 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"189e88b18ffd54ec12cb1bae99471bf85c15eb44153377e02a159109141c42f1"} Nov 25 19:37:42 crc kubenswrapper[4775]: E1125 19:37:42.611581 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="3.2s" Nov 25 19:37:43 crc kubenswrapper[4775]: I1125 19:37:43.488508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"47888a17ba4af6358edd03635f15ec3e88ad4577b6aa02bea7a4c154bff78b6d"} Nov 25 19:37:43 crc kubenswrapper[4775]: I1125 19:37:43.491092 4775 status_manager.go:851] "Failed to get status for pod" podUID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:43 crc kubenswrapper[4775]: E1125 19:37:43.492223 4775 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:43 crc kubenswrapper[4775]: I1125 19:37:43.492923 4775 status_manager.go:851] "Failed to get status for pod" podUID="68153acd-a0d8-4d45-b70d-9151cabdb18d" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5dc57f868f-tkjdt\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:44 crc kubenswrapper[4775]: E1125 19:37:44.498119 4775 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:37:45 crc kubenswrapper[4775]: E1125 19:37:45.812739 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.248:6443: connect: connection refused" interval="6.4s" Nov 25 19:37:48 crc kubenswrapper[4775]: I1125 19:37:48.849624 4775 status_manager.go:851] "Failed to get status for pod" podUID="68153acd-a0d8-4d45-b70d-9151cabdb18d" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5dc57f868f-tkjdt\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:48 crc kubenswrapper[4775]: I1125 19:37:48.850483 4775 status_manager.go:851] "Failed to get status for pod" podUID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:49 crc kubenswrapper[4775]: I1125 19:37:49.846341 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:49 crc kubenswrapper[4775]: I1125 19:37:49.848183 4775 status_manager.go:851] "Failed to get status for pod" podUID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:49 crc kubenswrapper[4775]: I1125 19:37:49.848843 4775 status_manager.go:851] "Failed to get status for pod" podUID="68153acd-a0d8-4d45-b70d-9151cabdb18d" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5dc57f868f-tkjdt\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:49 crc kubenswrapper[4775]: I1125 19:37:49.866678 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:49 crc kubenswrapper[4775]: I1125 19:37:49.866729 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:49 crc kubenswrapper[4775]: E1125 19:37:49.867275 4775 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:49 crc kubenswrapper[4775]: I1125 19:37:49.868000 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:50 crc kubenswrapper[4775]: I1125 19:37:50.547773 4775 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="cdcac3c5a431215af2cd63f1fc073b27899012047fdcf4171f8765c1824177ec" exitCode=0 Nov 25 19:37:50 crc kubenswrapper[4775]: I1125 19:37:50.547889 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"cdcac3c5a431215af2cd63f1fc073b27899012047fdcf4171f8765c1824177ec"} Nov 25 19:37:50 crc kubenswrapper[4775]: I1125 19:37:50.548301 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d1451b6ed00de8e4d8934da203608af257cc50da410efbae1c0a6a6c28ea7912"} Nov 25 19:37:50 crc kubenswrapper[4775]: I1125 19:37:50.548677 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:50 crc kubenswrapper[4775]: I1125 19:37:50.548705 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:50 crc kubenswrapper[4775]: E1125 19:37:50.549250 4775 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:50 crc kubenswrapper[4775]: I1125 19:37:50.549260 4775 status_manager.go:851] "Failed to get status for pod" podUID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:50 crc kubenswrapper[4775]: I1125 19:37:50.549746 4775 status_manager.go:851] "Failed to get status for pod" podUID="68153acd-a0d8-4d45-b70d-9151cabdb18d" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5dc57f868f-tkjdt\": dial tcp 38.102.83.248:6443: connect: connection refused" Nov 25 19:37:51 crc kubenswrapper[4775]: I1125 19:37:51.567041 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"77a400e5ef8a37f0b72a2bba31f0f9a27b0b1f76f06a411a1abf73c5fb0e30cf"} Nov 25 19:37:51 crc kubenswrapper[4775]: I1125 19:37:51.567177 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e38d1a8f7fb8a39bd5470577d29d0923be9f0a9d083fa5b8a0cb816e03505d08"} Nov 25 19:37:51 crc kubenswrapper[4775]: I1125 19:37:51.567188 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5124f066919c1e7731bd9af4b699e0350e5098a48b44ed375da7b0f3a4d1aea7"} Nov 25 19:37:51 crc kubenswrapper[4775]: I1125 19:37:51.570604 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 19:37:51 crc kubenswrapper[4775]: I1125 19:37:51.570672 4775 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71" exitCode=1 Nov 25 19:37:51 crc kubenswrapper[4775]: I1125 19:37:51.570704 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71"} Nov 25 19:37:51 crc kubenswrapper[4775]: I1125 19:37:51.571324 4775 scope.go:117] "RemoveContainer" containerID="b2057cdfd03d06d7c2445e8b7a4f66bb40939fb02f034e0f410d47a631b98a71" Nov 25 19:37:52 crc kubenswrapper[4775]: I1125 19:37:52.591634 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ff3f5f93bd8119af8e31d14bd2097405b271f7c422a9d9c80dc26a4831311643"} Nov 25 19:37:52 crc kubenswrapper[4775]: I1125 19:37:52.592096 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"697dbbdea653fa349685d3e9988b25fffd5a333cd0d67f805b57c4c8be989cc2"} Nov 25 19:37:52 crc kubenswrapper[4775]: I1125 19:37:52.592118 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:52 crc kubenswrapper[4775]: I1125 19:37:52.591924 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:52 crc kubenswrapper[4775]: I1125 19:37:52.592149 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:52 crc kubenswrapper[4775]: I1125 19:37:52.597848 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 19:37:52 crc kubenswrapper[4775]: I1125 19:37:52.597913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c9fbbe1e0297fe186058cafceae0ba589fd246087445e47b0db2f6dead0d309"} Nov 25 19:37:53 crc kubenswrapper[4775]: I1125 19:37:53.478782 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:37:54 crc kubenswrapper[4775]: I1125 19:37:54.869165 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:54 crc kubenswrapper[4775]: I1125 19:37:54.869302 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:54 crc kubenswrapper[4775]: I1125 19:37:54.876323 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:57 crc kubenswrapper[4775]: I1125 19:37:57.600948 4775 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:57 crc kubenswrapper[4775]: I1125 19:37:57.633402 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:57 crc kubenswrapper[4775]: I1125 19:37:57.633817 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:57 crc kubenswrapper[4775]: I1125 19:37:57.637157 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:37:58 crc kubenswrapper[4775]: I1125 19:37:58.641240 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:58 crc kubenswrapper[4775]: I1125 19:37:58.641292 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31e75bd7-c713-4504-a912-0ebfdad65c3b" Nov 25 19:37:58 crc kubenswrapper[4775]: I1125 19:37:58.872169 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7dc489e4-0587-48fb-8668-4de184ccf476" Nov 25 19:37:59 crc kubenswrapper[4775]: I1125 19:37:59.849285 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:37:59 crc kubenswrapper[4775]: I1125 19:37:59.856959 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:38:03 crc kubenswrapper[4775]: I1125 19:38:03.486609 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 19:38:06 crc kubenswrapper[4775]: I1125 19:38:06.972878 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 19:38:07 crc kubenswrapper[4775]: I1125 19:38:07.626193 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 19:38:07 crc kubenswrapper[4775]: I1125 19:38:07.702986 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 19:38:07 crc kubenswrapper[4775]: I1125 19:38:07.743180 4775 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 19:38:07 crc kubenswrapper[4775]: I1125 19:38:07.932345 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 19:38:08 crc kubenswrapper[4775]: I1125 19:38:08.274952 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 19:38:08 crc kubenswrapper[4775]: I1125 19:38:08.419872 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 19:38:08 crc kubenswrapper[4775]: I1125 19:38:08.882460 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 19:38:08 crc kubenswrapper[4775]: I1125 19:38:08.976267 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 19:38:09 crc kubenswrapper[4775]: I1125 19:38:09.042621 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 19:38:09 crc kubenswrapper[4775]: I1125 19:38:09.356581 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 19:38:09 crc kubenswrapper[4775]: I1125 19:38:09.932116 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 19:38:09 crc kubenswrapper[4775]: I1125 19:38:09.957917 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 19:38:10 crc kubenswrapper[4775]: I1125 19:38:10.321142 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Nov 25 19:38:10 crc kubenswrapper[4775]: I1125 19:38:10.338730 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 19:38:10 crc kubenswrapper[4775]: I1125 19:38:10.338852 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 19:38:10 crc kubenswrapper[4775]: I1125 19:38:10.386694 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 19:38:10 crc kubenswrapper[4775]: I1125 19:38:10.441134 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 19:38:10 crc kubenswrapper[4775]: I1125 19:38:10.676102 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 19:38:10 crc kubenswrapper[4775]: I1125 19:38:10.740003 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.020789 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.030274 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.162074 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.180184 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.239668 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.322818 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.441526 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.443009 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.587597 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.606894 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.639705 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.711801 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.871307 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 19:38:11 crc kubenswrapper[4775]: I1125 19:38:11.974505 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 
19:38:12.065870 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.169511 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.303069 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.438767 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.551605 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.559555 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.573527 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.617965 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.651589 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.676775 4775 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.784457 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 19:38:12 crc kubenswrapper[4775]: 
I1125 19:38:12.888064 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.970721 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 19:38:12 crc kubenswrapper[4775]: I1125 19:38:12.975391 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.012246 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.155700 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.268577 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.313912 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.335762 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.343428 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.401504 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.495560 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.634381 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.708023 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.718818 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.815438 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.858606 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.901598 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 19:38:13 crc kubenswrapper[4775]: I1125 19:38:13.916567 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.003745 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.112733 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.126006 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.153808 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.155264 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.202434 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.208084 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.304812 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.316586 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.352433 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.372089 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.392084 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.457235 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 19:38:14 crc 
kubenswrapper[4775]: I1125 19:38:14.530185 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.550697 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.552493 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.604908 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.618661 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.651159 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.740354 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 19:38:14 crc kubenswrapper[4775]: I1125 19:38:14.809968 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:14.841365 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:14.890253 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:14.891630 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" 
Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:14.893546 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:14.904269 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:14.938756 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:14.975202 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.103070 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.111007 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.228072 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.243534 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.258248 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.280351 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.367945 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.402070 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.480271 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.613302 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.725954 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.828219 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.853195 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.860176 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.862971 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.898927 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.919455 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 19:38:15 crc kubenswrapper[4775]: I1125 19:38:15.978061 4775 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.005333 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.056192 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.178758 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.198205 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.323887 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.369106 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.442743 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.445500 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.596217 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.665719 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 
19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.707390 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.718860 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.753502 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.785354 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.802464 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.850772 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.863361 4775 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.895185 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.903112 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.908477 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 
19:38:16.942293 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 19:38:16 crc kubenswrapper[4775]: I1125 19:38:16.965812 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.021554 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.023701 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.069861 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.159390 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.178831 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.258517 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.282632 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.390434 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.460369 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.564006 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.610976 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.713345 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.748877 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.781950 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.861839 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.883567 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.907376 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 19:38:17 crc kubenswrapper[4775]: I1125 19:38:17.967373 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.027159 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.062610 4775 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.081923 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.104159 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.112043 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.114341 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.150200 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.186506 4775 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.205830 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.260619 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.390435 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.419218 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.420588 4775 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.653075 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.659121 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.687997 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.705143 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.706356 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.707282 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.777528 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.785292 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.804522 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.900342 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 19:38:18 crc kubenswrapper[4775]: 
I1125 19:38:18.908129 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 19:38:18 crc kubenswrapper[4775]: I1125 19:38:18.932738 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.169444 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.209557 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.211820 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.294268 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.320076 4775 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.324576 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.331817 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.333607 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.334518 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.385918 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.439757 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.481723 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.492273 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.537443 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.568512 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.820641 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.856979 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.884970 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.913833 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 19:38:19 crc kubenswrapper[4775]: I1125 19:38:19.944123 4775 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.081022 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.086904 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.097242 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.125574 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.127443 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.170988 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.191718 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.221399 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.273216 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.324811 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 
25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.331457 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.356737 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.463925 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.637297 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.796134 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.806040 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.822579 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.917375 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.917995 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 19:38:20 crc kubenswrapper[4775]: I1125 19:38:20.946747 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.372997 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.516234 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.545374 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.565836 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.578084 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.596172 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.695609 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.736452 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.799457 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.947370 4775 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 19:38:21 crc kubenswrapper[4775]: I1125 19:38:21.949722 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.024160 4775 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.040028 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.205992 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.207703 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.234080 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.303192 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.330803 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.437496 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.509293 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.804155 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 19:38:22 crc kubenswrapper[4775]: I1125 19:38:22.984748 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 
19:38:23.035343 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.057547 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.065405 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.079317 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.083002 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.128468 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.379744 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.501333 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.594598 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.628089 4775 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.628945 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5dc57f868f-tkjdt" 
podStartSLOduration=84.628771135 podStartE2EDuration="1m24.628771135s" podCreationTimestamp="2025-11-25 19:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:37:57.558049459 +0000 UTC m=+259.474411825" watchObservedRunningTime="2025-11-25 19:38:23.628771135 +0000 UTC m=+285.545133541" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.634481 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.634545 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.643331 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.660803 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.660779934 podStartE2EDuration="26.660779934s" podCreationTimestamp="2025-11-25 19:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:38:23.656255941 +0000 UTC m=+285.572618357" watchObservedRunningTime="2025-11-25 19:38:23.660779934 +0000 UTC m=+285.577142300" Nov 25 19:38:23 crc kubenswrapper[4775]: I1125 19:38:23.990188 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 19:38:24 crc kubenswrapper[4775]: I1125 19:38:24.034226 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 19:38:24 crc kubenswrapper[4775]: I1125 19:38:24.097542 4775 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 19:38:24 crc kubenswrapper[4775]: I1125 19:38:24.187896 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 19:38:25 crc kubenswrapper[4775]: I1125 19:38:25.017449 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 19:38:26 crc kubenswrapper[4775]: I1125 19:38:26.214043 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 19:38:31 crc kubenswrapper[4775]: I1125 19:38:31.551793 4775 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 19:38:31 crc kubenswrapper[4775]: I1125 19:38:31.552928 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://47888a17ba4af6358edd03635f15ec3e88ad4577b6aa02bea7a4c154bff78b6d" gracePeriod=5 Nov 25 19:38:36 crc kubenswrapper[4775]: I1125 19:38:36.903507 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 19:38:36 crc kubenswrapper[4775]: I1125 19:38:36.904112 4775 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="47888a17ba4af6358edd03635f15ec3e88ad4577b6aa02bea7a4c154bff78b6d" exitCode=137 Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.164927 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.165023 4775 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295047 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295114 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295144 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295180 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295207 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295289 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295314 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295389 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295460 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295726 4775 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295747 4775 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295759 4775 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.295771 4775 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.306897 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.396850 4775 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.912695 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.913790 4775 scope.go:117] "RemoveContainer" containerID="47888a17ba4af6358edd03635f15ec3e88ad4577b6aa02bea7a4c154bff78b6d" Nov 25 19:38:37 crc kubenswrapper[4775]: I1125 19:38:37.913902 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 19:38:38 crc kubenswrapper[4775]: I1125 19:38:38.860133 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 19:38:39 crc kubenswrapper[4775]: I1125 19:38:39.930278 4775 generic.go:334] "Generic (PLEG): container finished" podID="3566ef9c-3d80-480e-b069-1ff60753877f" containerID="faf8caab22e1737baddc8abc010b031989665f95cfcfad0880cff713cf4399c1" exitCode=0 Nov 25 19:38:39 crc kubenswrapper[4775]: I1125 19:38:39.930685 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" event={"ID":"3566ef9c-3d80-480e-b069-1ff60753877f","Type":"ContainerDied","Data":"faf8caab22e1737baddc8abc010b031989665f95cfcfad0880cff713cf4399c1"} Nov 25 19:38:39 crc kubenswrapper[4775]: I1125 19:38:39.931813 4775 scope.go:117] "RemoveContainer" containerID="faf8caab22e1737baddc8abc010b031989665f95cfcfad0880cff713cf4399c1" Nov 25 19:38:40 crc 
kubenswrapper[4775]: I1125 19:38:40.938313 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" event={"ID":"3566ef9c-3d80-480e-b069-1ff60753877f","Type":"ContainerStarted","Data":"d7cb6d42003dc5b3be234d14573df5a22421cd58659236519e473264062b62b9"} Nov 25 19:38:40 crc kubenswrapper[4775]: I1125 19:38:40.939602 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:38:40 crc kubenswrapper[4775]: I1125 19:38:40.942264 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.106516 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pdqsx"] Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.107450 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" podUID="0d24c230-e34e-4509-bba0-86d680714e25" containerName="controller-manager" containerID="cri-o://e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3" gracePeriod=30 Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.209950 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5"] Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.210407 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" podUID="6865cd6d-f340-4084-9efe-388f7744d93a" containerName="route-controller-manager" containerID="cri-o://6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85" gracePeriod=30 Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.485624 4775 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.581882 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.655975 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-proxy-ca-bundles\") pod \"0d24c230-e34e-4509-bba0-86d680714e25\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.656033 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tclqb\" (UniqueName: \"kubernetes.io/projected/0d24c230-e34e-4509-bba0-86d680714e25-kube-api-access-tclqb\") pod \"0d24c230-e34e-4509-bba0-86d680714e25\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.656063 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-client-ca\") pod \"0d24c230-e34e-4509-bba0-86d680714e25\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.656098 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d24c230-e34e-4509-bba0-86d680714e25-serving-cert\") pod \"0d24c230-e34e-4509-bba0-86d680714e25\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.656136 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-config\") pod \"0d24c230-e34e-4509-bba0-86d680714e25\" (UID: \"0d24c230-e34e-4509-bba0-86d680714e25\") " Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.657239 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d24c230-e34e-4509-bba0-86d680714e25" (UID: "0d24c230-e34e-4509-bba0-86d680714e25"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.657309 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-config" (OuterVolumeSpecName: "config") pod "0d24c230-e34e-4509-bba0-86d680714e25" (UID: "0d24c230-e34e-4509-bba0-86d680714e25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.657258 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d24c230-e34e-4509-bba0-86d680714e25" (UID: "0d24c230-e34e-4509-bba0-86d680714e25"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.662893 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d24c230-e34e-4509-bba0-86d680714e25-kube-api-access-tclqb" (OuterVolumeSpecName: "kube-api-access-tclqb") pod "0d24c230-e34e-4509-bba0-86d680714e25" (UID: "0d24c230-e34e-4509-bba0-86d680714e25"). InnerVolumeSpecName "kube-api-access-tclqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.663641 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d24c230-e34e-4509-bba0-86d680714e25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d24c230-e34e-4509-bba0-86d680714e25" (UID: "0d24c230-e34e-4509-bba0-86d680714e25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.756878 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6865cd6d-f340-4084-9efe-388f7744d93a-serving-cert\") pod \"6865cd6d-f340-4084-9efe-388f7744d93a\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.757023 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-client-ca\") pod \"6865cd6d-f340-4084-9efe-388f7744d93a\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.757059 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-config\") pod \"6865cd6d-f340-4084-9efe-388f7744d93a\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.757114 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltsxt\" (UniqueName: \"kubernetes.io/projected/6865cd6d-f340-4084-9efe-388f7744d93a-kube-api-access-ltsxt\") pod \"6865cd6d-f340-4084-9efe-388f7744d93a\" (UID: \"6865cd6d-f340-4084-9efe-388f7744d93a\") " Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.757521 4775 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.757556 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.757577 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tclqb\" (UniqueName: \"kubernetes.io/projected/0d24c230-e34e-4509-bba0-86d680714e25-kube-api-access-tclqb\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.757593 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d24c230-e34e-4509-bba0-86d680714e25-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.757607 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d24c230-e34e-4509-bba0-86d680714e25-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.758204 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-config" (OuterVolumeSpecName: "config") pod "6865cd6d-f340-4084-9efe-388f7744d93a" (UID: "6865cd6d-f340-4084-9efe-388f7744d93a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.758251 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-client-ca" (OuterVolumeSpecName: "client-ca") pod "6865cd6d-f340-4084-9efe-388f7744d93a" (UID: "6865cd6d-f340-4084-9efe-388f7744d93a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.761347 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6865cd6d-f340-4084-9efe-388f7744d93a-kube-api-access-ltsxt" (OuterVolumeSpecName: "kube-api-access-ltsxt") pod "6865cd6d-f340-4084-9efe-388f7744d93a" (UID: "6865cd6d-f340-4084-9efe-388f7744d93a"). InnerVolumeSpecName "kube-api-access-ltsxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.762130 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6865cd6d-f340-4084-9efe-388f7744d93a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6865cd6d-f340-4084-9efe-388f7744d93a" (UID: "6865cd6d-f340-4084-9efe-388f7744d93a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.860420 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.860465 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6865cd6d-f340-4084-9efe-388f7744d93a-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.860481 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltsxt\" (UniqueName: \"kubernetes.io/projected/6865cd6d-f340-4084-9efe-388f7744d93a-kube-api-access-ltsxt\") on node \"crc\" DevicePath \"\"" Nov 25 19:38:47 crc kubenswrapper[4775]: I1125 19:38:47.860494 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6865cd6d-f340-4084-9efe-388f7744d93a-serving-cert\") on node \"crc\" DevicePath 
\"\"" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.010413 4775 generic.go:334] "Generic (PLEG): container finished" podID="6865cd6d-f340-4084-9efe-388f7744d93a" containerID="6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85" exitCode=0 Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.010555 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.010498 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" event={"ID":"6865cd6d-f340-4084-9efe-388f7744d93a","Type":"ContainerDied","Data":"6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85"} Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.010768 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5" event={"ID":"6865cd6d-f340-4084-9efe-388f7744d93a","Type":"ContainerDied","Data":"ca9a5dbc39c9a99509b889e607ed5118e026751b5f1389005fd5932a3cfaed82"} Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.010825 4775 scope.go:117] "RemoveContainer" containerID="6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.016910 4775 generic.go:334] "Generic (PLEG): container finished" podID="0d24c230-e34e-4509-bba0-86d680714e25" containerID="e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3" exitCode=0 Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.016953 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" event={"ID":"0d24c230-e34e-4509-bba0-86d680714e25","Type":"ContainerDied","Data":"e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3"} Nov 25 19:38:48 crc 
kubenswrapper[4775]: I1125 19:38:48.016980 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" event={"ID":"0d24c230-e34e-4509-bba0-86d680714e25","Type":"ContainerDied","Data":"6c992a71789d1cd176b7bef9e890f516f0a1e33d4ca1a6c0067dc7279c7b72fb"} Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.017063 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pdqsx" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.054969 4775 scope.go:117] "RemoveContainer" containerID="6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85" Nov 25 19:38:48 crc kubenswrapper[4775]: E1125 19:38:48.057622 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85\": container with ID starting with 6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85 not found: ID does not exist" containerID="6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.057930 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85"} err="failed to get container status \"6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85\": rpc error: code = NotFound desc = could not find container \"6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85\": container with ID starting with 6799fecd68c066f9f30931a4adeacb58c4e24644071e575dcd08a70e7188cf85 not found: ID does not exist" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.058128 4775 scope.go:117] "RemoveContainer" containerID="e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 
19:38:48.083449 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pdqsx"] Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.088638 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pdqsx"] Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.090772 4775 scope.go:117] "RemoveContainer" containerID="e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3" Nov 25 19:38:48 crc kubenswrapper[4775]: E1125 19:38:48.091678 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3\": container with ID starting with e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3 not found: ID does not exist" containerID="e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.091882 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3"} err="failed to get container status \"e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3\": rpc error: code = NotFound desc = could not find container \"e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3\": container with ID starting with e1612604b5571db1c546da3e63c1734418e4c3007cc1687f7e91ce2aa499bfa3 not found: ID does not exist" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.094548 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5"] Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.099244 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw9d5"] Nov 25 19:38:48 crc 
kubenswrapper[4775]: I1125 19:38:48.479287 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f"] Nov 25 19:38:48 crc kubenswrapper[4775]: E1125 19:38:48.479699 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" containerName="installer" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.479745 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" containerName="installer" Nov 25 19:38:48 crc kubenswrapper[4775]: E1125 19:38:48.479762 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6865cd6d-f340-4084-9efe-388f7744d93a" containerName="route-controller-manager" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.479775 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6865cd6d-f340-4084-9efe-388f7744d93a" containerName="route-controller-manager" Nov 25 19:38:48 crc kubenswrapper[4775]: E1125 19:38:48.479791 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.479804 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 19:38:48 crc kubenswrapper[4775]: E1125 19:38:48.479837 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d24c230-e34e-4509-bba0-86d680714e25" containerName="controller-manager" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.479849 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d24c230-e34e-4509-bba0-86d680714e25" containerName="controller-manager" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.480024 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d24c230-e34e-4509-bba0-86d680714e25" containerName="controller-manager" Nov 25 19:38:48 crc 
kubenswrapper[4775]: I1125 19:38:48.480039 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.480054 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6865cd6d-f340-4084-9efe-388f7744d93a" containerName="route-controller-manager" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.480075 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="103567fc-c7f2-4a0a-ba9b-674148cbda9f" containerName="installer" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.480728 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.483868 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.484284 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.484876 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.485918 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.486155 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.486366 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 
19:38:48.487490 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8fbc96864-bkw5j"] Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.488718 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.497533 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f"] Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.497963 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.498733 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.498827 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.499025 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.499153 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.502353 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.503356 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8fbc96864-bkw5j"] Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.510580 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.573345 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-proxy-ca-bundles\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.573521 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znmv4\" (UniqueName: \"kubernetes.io/projected/e0485668-3034-4e59-8a04-96e639f2736f-kube-api-access-znmv4\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.573588 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-config\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.573635 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0485668-3034-4e59-8a04-96e639f2736f-serving-cert\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.573750 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-config\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.573792 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2520d86e-e327-4e5d-8c7b-3864f470c10f-serving-cert\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.573825 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwp5\" (UniqueName: \"kubernetes.io/projected/2520d86e-e327-4e5d-8c7b-3864f470c10f-kube-api-access-kmwp5\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.573857 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-client-ca\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.573914 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-client-ca\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " 
pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.674862 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-proxy-ca-bundles\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.674952 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znmv4\" (UniqueName: \"kubernetes.io/projected/e0485668-3034-4e59-8a04-96e639f2736f-kube-api-access-znmv4\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.674994 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-config\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.675031 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0485668-3034-4e59-8a04-96e639f2736f-serving-cert\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.675061 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-config\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.675097 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2520d86e-e327-4e5d-8c7b-3864f470c10f-serving-cert\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.675132 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwp5\" (UniqueName: \"kubernetes.io/projected/2520d86e-e327-4e5d-8c7b-3864f470c10f-kube-api-access-kmwp5\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.675178 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-client-ca\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.675207 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-client-ca\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc 
kubenswrapper[4775]: I1125 19:38:48.676433 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-client-ca\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.680792 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-client-ca\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.685465 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-proxy-ca-bundles\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.685963 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-config\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.688628 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2520d86e-e327-4e5d-8c7b-3864f470c10f-serving-cert\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " 
pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.708631 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwp5\" (UniqueName: \"kubernetes.io/projected/2520d86e-e327-4e5d-8c7b-3864f470c10f-kube-api-access-kmwp5\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.709147 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0485668-3034-4e59-8a04-96e639f2736f-serving-cert\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.710833 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znmv4\" (UniqueName: \"kubernetes.io/projected/e0485668-3034-4e59-8a04-96e639f2736f-kube-api-access-znmv4\") pod \"controller-manager-8fbc96864-bkw5j\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.713830 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-config\") pod \"route-controller-manager-76d7476964-r5l5f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.811189 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.828221 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.872006 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d24c230-e34e-4509-bba0-86d680714e25" path="/var/lib/kubelet/pods/0d24c230-e34e-4509-bba0-86d680714e25/volumes" Nov 25 19:38:48 crc kubenswrapper[4775]: I1125 19:38:48.873033 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6865cd6d-f340-4084-9efe-388f7744d93a" path="/var/lib/kubelet/pods/6865cd6d-f340-4084-9efe-388f7744d93a/volumes" Nov 25 19:38:49 crc kubenswrapper[4775]: I1125 19:38:49.064631 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f"] Nov 25 19:38:49 crc kubenswrapper[4775]: I1125 19:38:49.312950 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8fbc96864-bkw5j"] Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.036569 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" event={"ID":"2520d86e-e327-4e5d-8c7b-3864f470c10f","Type":"ContainerStarted","Data":"1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef"} Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.037304 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.037318 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" 
event={"ID":"2520d86e-e327-4e5d-8c7b-3864f470c10f","Type":"ContainerStarted","Data":"96c3f9fa78e39a5d60902984481aeb0ac630e22c9f1788eb237a2ddc7c4e6182"} Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.040821 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" event={"ID":"e0485668-3034-4e59-8a04-96e639f2736f","Type":"ContainerStarted","Data":"2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff"} Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.040877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" event={"ID":"e0485668-3034-4e59-8a04-96e639f2736f","Type":"ContainerStarted","Data":"b11e9e5dcb306bedba88b8408211b15f578faad7ad5375a6afd928f814a793a8"} Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.041211 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.045686 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.047960 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.065382 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" podStartSLOduration=3.065362303 podStartE2EDuration="3.065362303s" podCreationTimestamp="2025-11-25 19:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:38:50.060003089 +0000 UTC m=+311.976365465" 
watchObservedRunningTime="2025-11-25 19:38:50.065362303 +0000 UTC m=+311.981724679" Nov 25 19:38:50 crc kubenswrapper[4775]: I1125 19:38:50.097947 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" podStartSLOduration=3.097924477 podStartE2EDuration="3.097924477s" podCreationTimestamp="2025-11-25 19:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:38:50.080790459 +0000 UTC m=+311.997152865" watchObservedRunningTime="2025-11-25 19:38:50.097924477 +0000 UTC m=+312.014286843" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.297639 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8fbc96864-bkw5j"] Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.299006 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" podUID="e0485668-3034-4e59-8a04-96e639f2736f" containerName="controller-manager" containerID="cri-o://2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff" gracePeriod=30 Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.379948 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f"] Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.380263 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" podUID="2520d86e-e327-4e5d-8c7b-3864f470c10f" containerName="route-controller-manager" containerID="cri-o://1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef" gracePeriod=30 Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.872071 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.921571 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.959134 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-787df948cc-j8rwc"] Nov 25 19:39:00 crc kubenswrapper[4775]: E1125 19:39:00.959438 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2520d86e-e327-4e5d-8c7b-3864f470c10f" containerName="route-controller-manager" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.959458 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2520d86e-e327-4e5d-8c7b-3864f470c10f" containerName="route-controller-manager" Nov 25 19:39:00 crc kubenswrapper[4775]: E1125 19:39:00.959472 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0485668-3034-4e59-8a04-96e639f2736f" containerName="controller-manager" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.959480 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0485668-3034-4e59-8a04-96e639f2736f" containerName="controller-manager" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.959591 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2520d86e-e327-4e5d-8c7b-3864f470c10f" containerName="route-controller-manager" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.959614 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0485668-3034-4e59-8a04-96e639f2736f" containerName="controller-manager" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.960093 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.979971 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787df948cc-j8rwc"] Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.986486 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-client-ca\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.986562 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5decc231-d109-410b-8ddd-539447b150f9-serving-cert\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.986624 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-proxy-ca-bundles\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.986685 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-config\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " 
pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:00 crc kubenswrapper[4775]: I1125 19:39:00.986710 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkdbz\" (UniqueName: \"kubernetes.io/projected/5decc231-d109-410b-8ddd-539447b150f9-kube-api-access-bkdbz\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.056573 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp"] Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.058737 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.071358 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp"] Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.087714 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-proxy-ca-bundles\") pod \"e0485668-3034-4e59-8a04-96e639f2736f\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.087776 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znmv4\" (UniqueName: \"kubernetes.io/projected/e0485668-3034-4e59-8a04-96e639f2736f-kube-api-access-znmv4\") pod \"e0485668-3034-4e59-8a04-96e639f2736f\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.087830 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-config\") pod \"e0485668-3034-4e59-8a04-96e639f2736f\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.087860 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-client-ca\") pod \"e0485668-3034-4e59-8a04-96e639f2736f\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.087888 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-client-ca\") pod \"2520d86e-e327-4e5d-8c7b-3864f470c10f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.087928 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwp5\" (UniqueName: \"kubernetes.io/projected/2520d86e-e327-4e5d-8c7b-3864f470c10f-kube-api-access-kmwp5\") pod \"2520d86e-e327-4e5d-8c7b-3864f470c10f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.087958 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-config\") pod \"2520d86e-e327-4e5d-8c7b-3864f470c10f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.087985 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0485668-3034-4e59-8a04-96e639f2736f-serving-cert\") pod \"e0485668-3034-4e59-8a04-96e639f2736f\" (UID: \"e0485668-3034-4e59-8a04-96e639f2736f\") " Nov 25 
19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088011 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2520d86e-e327-4e5d-8c7b-3864f470c10f-serving-cert\") pod \"2520d86e-e327-4e5d-8c7b-3864f470c10f\" (UID: \"2520d86e-e327-4e5d-8c7b-3864f470c10f\") " Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088100 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba721fca-b29a-4157-bc28-8c07078210e0-serving-cert\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088147 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-proxy-ca-bundles\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088352 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-config\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088377 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkdbz\" (UniqueName: \"kubernetes.io/projected/5decc231-d109-410b-8ddd-539447b150f9-kube-api-access-bkdbz\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " 
pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088415 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-client-ca\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088450 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5decc231-d109-410b-8ddd-539447b150f9-serving-cert\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088475 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-config\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088495 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-client-ca\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088526 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmc5\" (UniqueName: 
\"kubernetes.io/projected/ba721fca-b29a-4157-bc28-8c07078210e0-kube-api-access-vkmc5\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.088881 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e0485668-3034-4e59-8a04-96e639f2736f" (UID: "e0485668-3034-4e59-8a04-96e639f2736f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.089807 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-client-ca" (OuterVolumeSpecName: "client-ca") pod "2520d86e-e327-4e5d-8c7b-3864f470c10f" (UID: "2520d86e-e327-4e5d-8c7b-3864f470c10f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.090072 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-client-ca\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.090079 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-proxy-ca-bundles\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.090405 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0485668-3034-4e59-8a04-96e639f2736f" (UID: "e0485668-3034-4e59-8a04-96e639f2736f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.090771 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-config" (OuterVolumeSpecName: "config") pod "e0485668-3034-4e59-8a04-96e639f2736f" (UID: "e0485668-3034-4e59-8a04-96e639f2736f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.090960 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-config\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.091204 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-config" (OuterVolumeSpecName: "config") pod "2520d86e-e327-4e5d-8c7b-3864f470c10f" (UID: "2520d86e-e327-4e5d-8c7b-3864f470c10f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.094872 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5decc231-d109-410b-8ddd-539447b150f9-serving-cert\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.102991 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2520d86e-e327-4e5d-8c7b-3864f470c10f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2520d86e-e327-4e5d-8c7b-3864f470c10f" (UID: "2520d86e-e327-4e5d-8c7b-3864f470c10f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.103216 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2520d86e-e327-4e5d-8c7b-3864f470c10f-kube-api-access-kmwp5" (OuterVolumeSpecName: "kube-api-access-kmwp5") pod "2520d86e-e327-4e5d-8c7b-3864f470c10f" (UID: "2520d86e-e327-4e5d-8c7b-3864f470c10f"). InnerVolumeSpecName "kube-api-access-kmwp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.103586 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0485668-3034-4e59-8a04-96e639f2736f-kube-api-access-znmv4" (OuterVolumeSpecName: "kube-api-access-znmv4") pod "e0485668-3034-4e59-8a04-96e639f2736f" (UID: "e0485668-3034-4e59-8a04-96e639f2736f"). InnerVolumeSpecName "kube-api-access-znmv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.107244 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkdbz\" (UniqueName: \"kubernetes.io/projected/5decc231-d109-410b-8ddd-539447b150f9-kube-api-access-bkdbz\") pod \"controller-manager-787df948cc-j8rwc\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.109528 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0485668-3034-4e59-8a04-96e639f2736f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0485668-3034-4e59-8a04-96e639f2736f" (UID: "e0485668-3034-4e59-8a04-96e639f2736f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.119863 4775 generic.go:334] "Generic (PLEG): container finished" podID="e0485668-3034-4e59-8a04-96e639f2736f" containerID="2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff" exitCode=0 Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.119949 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" event={"ID":"e0485668-3034-4e59-8a04-96e639f2736f","Type":"ContainerDied","Data":"2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff"} Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.119986 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" event={"ID":"e0485668-3034-4e59-8a04-96e639f2736f","Type":"ContainerDied","Data":"b11e9e5dcb306bedba88b8408211b15f578faad7ad5375a6afd928f814a793a8"} Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.120009 4775 scope.go:117] "RemoveContainer" containerID="2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.120158 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8fbc96864-bkw5j" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.131527 4775 generic.go:334] "Generic (PLEG): container finished" podID="2520d86e-e327-4e5d-8c7b-3864f470c10f" containerID="1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef" exitCode=0 Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.131584 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" event={"ID":"2520d86e-e327-4e5d-8c7b-3864f470c10f","Type":"ContainerDied","Data":"1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef"} Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.131623 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" event={"ID":"2520d86e-e327-4e5d-8c7b-3864f470c10f","Type":"ContainerDied","Data":"96c3f9fa78e39a5d60902984481aeb0ac630e22c9f1788eb237a2ddc7c4e6182"} Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.131671 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.149435 4775 scope.go:117] "RemoveContainer" containerID="2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff" Nov 25 19:39:01 crc kubenswrapper[4775]: E1125 19:39:01.150040 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff\": container with ID starting with 2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff not found: ID does not exist" containerID="2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.150068 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff"} err="failed to get container status \"2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff\": rpc error: code = NotFound desc = could not find container \"2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff\": container with ID starting with 2e59c30b881ed9c12da34cdc1379ccb355576ecb36689cde21111c0937436aff not found: ID does not exist" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.150089 4775 scope.go:117] "RemoveContainer" containerID="1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.150148 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8fbc96864-bkw5j"] Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.155577 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8fbc96864-bkw5j"] Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.167572 4775 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f"] Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.167635 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d7476964-r5l5f"] Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.169626 4775 scope.go:117] "RemoveContainer" containerID="1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef" Nov 25 19:39:01 crc kubenswrapper[4775]: E1125 19:39:01.170056 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef\": container with ID starting with 1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef not found: ID does not exist" containerID="1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.170156 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef"} err="failed to get container status \"1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef\": rpc error: code = NotFound desc = could not find container \"1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef\": container with ID starting with 1d4931b38294a762cb309f60386f10faf85c2b8dd78b4ea1c1babc3f0cd454ef not found: ID does not exist" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.189478 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba721fca-b29a-4157-bc28-8c07078210e0-serving-cert\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc 
kubenswrapper[4775]: I1125 19:39:01.189576 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-config\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.189595 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-client-ca\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.189615 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkmc5\" (UniqueName: \"kubernetes.io/projected/ba721fca-b29a-4157-bc28-8c07078210e0-kube-api-access-vkmc5\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.189744 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.189760 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znmv4\" (UniqueName: \"kubernetes.io/projected/e0485668-3034-4e59-8a04-96e639f2736f-kube-api-access-znmv4\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.190061 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.190072 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0485668-3034-4e59-8a04-96e639f2736f-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.190081 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.190089 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwp5\" (UniqueName: \"kubernetes.io/projected/2520d86e-e327-4e5d-8c7b-3864f470c10f-kube-api-access-kmwp5\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.190098 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2520d86e-e327-4e5d-8c7b-3864f470c10f-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.190107 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0485668-3034-4e59-8a04-96e639f2736f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.190115 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2520d86e-e327-4e5d-8c7b-3864f470c10f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.190760 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-client-ca\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: 
\"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.190975 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-config\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.192818 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba721fca-b29a-4157-bc28-8c07078210e0-serving-cert\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.204113 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkmc5\" (UniqueName: \"kubernetes.io/projected/ba721fca-b29a-4157-bc28-8c07078210e0-kube-api-access-vkmc5\") pod \"route-controller-manager-7f9d8fdb97-4g7xp\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.280503 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.410819 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.571569 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787df948cc-j8rwc"] Nov 25 19:39:01 crc kubenswrapper[4775]: I1125 19:39:01.881513 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp"] Nov 25 19:39:01 crc kubenswrapper[4775]: W1125 19:39:01.889529 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba721fca_b29a_4157_bc28_8c07078210e0.slice/crio-230559cfd452085f03b8dc77868cd58a2cfb7cbe70e45ea9cf6960d62e5c4230 WatchSource:0}: Error finding container 230559cfd452085f03b8dc77868cd58a2cfb7cbe70e45ea9cf6960d62e5c4230: Status 404 returned error can't find the container with id 230559cfd452085f03b8dc77868cd58a2cfb7cbe70e45ea9cf6960d62e5c4230 Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.137629 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" event={"ID":"5decc231-d109-410b-8ddd-539447b150f9","Type":"ContainerStarted","Data":"eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956"} Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.137734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" event={"ID":"5decc231-d109-410b-8ddd-539447b150f9","Type":"ContainerStarted","Data":"27b070f6ac48844e7e47a928a7bf422c8afa52ccdc898256e492b67f493f2104"} Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.140097 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" 
event={"ID":"ba721fca-b29a-4157-bc28-8c07078210e0","Type":"ContainerStarted","Data":"326c524b513a9b9a33eacf9023068631eee6b2350d763f998c10d6486afcfa79"} Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.140137 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" event={"ID":"ba721fca-b29a-4157-bc28-8c07078210e0","Type":"ContainerStarted","Data":"230559cfd452085f03b8dc77868cd58a2cfb7cbe70e45ea9cf6960d62e5c4230"} Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.140504 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.154764 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" podStartSLOduration=2.154744439 podStartE2EDuration="2.154744439s" podCreationTimestamp="2025-11-25 19:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:39:02.15157578 +0000 UTC m=+324.067938146" watchObservedRunningTime="2025-11-25 19:39:02.154744439 +0000 UTC m=+324.071106805" Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.170318 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" podStartSLOduration=1.170298309 podStartE2EDuration="1.170298309s" podCreationTimestamp="2025-11-25 19:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:39:02.169883568 +0000 UTC m=+324.086245944" watchObservedRunningTime="2025-11-25 19:39:02.170298309 +0000 UTC m=+324.086660675" Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.627384 4775 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.858223 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2520d86e-e327-4e5d-8c7b-3864f470c10f" path="/var/lib/kubelet/pods/2520d86e-e327-4e5d-8c7b-3864f470c10f/volumes" Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.859536 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0485668-3034-4e59-8a04-96e639f2736f" path="/var/lib/kubelet/pods/e0485668-3034-4e59-8a04-96e639f2736f/volumes" Nov 25 19:39:02 crc kubenswrapper[4775]: I1125 19:39:02.949097 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-787df948cc-j8rwc"] Nov 25 19:39:03 crc kubenswrapper[4775]: I1125 19:39:03.146392 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:03 crc kubenswrapper[4775]: I1125 19:39:03.157066 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.150374 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" podUID="5decc231-d109-410b-8ddd-539447b150f9" containerName="controller-manager" containerID="cri-o://eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956" gracePeriod=30 Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.611634 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.642217 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bdd649c4d-s6vrd"] Nov 25 19:39:04 crc kubenswrapper[4775]: E1125 19:39:04.642441 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5decc231-d109-410b-8ddd-539447b150f9" containerName="controller-manager" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.642465 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5decc231-d109-410b-8ddd-539447b150f9" containerName="controller-manager" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.642560 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5decc231-d109-410b-8ddd-539447b150f9" containerName="controller-manager" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.642972 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.659297 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bdd649c4d-s6vrd"] Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.737413 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-client-ca\") pod \"5decc231-d109-410b-8ddd-539447b150f9\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.737803 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkdbz\" (UniqueName: \"kubernetes.io/projected/5decc231-d109-410b-8ddd-539447b150f9-kube-api-access-bkdbz\") pod \"5decc231-d109-410b-8ddd-539447b150f9\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.737857 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5decc231-d109-410b-8ddd-539447b150f9-serving-cert\") pod \"5decc231-d109-410b-8ddd-539447b150f9\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.737929 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-proxy-ca-bundles\") pod \"5decc231-d109-410b-8ddd-539447b150f9\" (UID: \"5decc231-d109-410b-8ddd-539447b150f9\") " Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738001 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-config\") pod \"5decc231-d109-410b-8ddd-539447b150f9\" (UID: 
\"5decc231-d109-410b-8ddd-539447b150f9\") " Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738196 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28362bd-02ef-486a-b6a6-ffc342baf466-config\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738281 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a28362bd-02ef-486a-b6a6-ffc342baf466-serving-cert\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738328 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr5g\" (UniqueName: \"kubernetes.io/projected/a28362bd-02ef-486a-b6a6-ffc342baf466-kube-api-access-plr5g\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738335 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5decc231-d109-410b-8ddd-539447b150f9" (UID: "5decc231-d109-410b-8ddd-539447b150f9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738473 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5decc231-d109-410b-8ddd-539447b150f9" (UID: "5decc231-d109-410b-8ddd-539447b150f9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738601 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-config" (OuterVolumeSpecName: "config") pod "5decc231-d109-410b-8ddd-539447b150f9" (UID: "5decc231-d109-410b-8ddd-539447b150f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738616 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a28362bd-02ef-486a-b6a6-ffc342baf466-proxy-ca-bundles\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738739 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a28362bd-02ef-486a-b6a6-ffc342baf466-client-ca\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738805 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738823 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.738835 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5decc231-d109-410b-8ddd-539447b150f9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.744412 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5decc231-d109-410b-8ddd-539447b150f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5decc231-d109-410b-8ddd-539447b150f9" (UID: "5decc231-d109-410b-8ddd-539447b150f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.745339 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5decc231-d109-410b-8ddd-539447b150f9-kube-api-access-bkdbz" (OuterVolumeSpecName: "kube-api-access-bkdbz") pod "5decc231-d109-410b-8ddd-539447b150f9" (UID: "5decc231-d109-410b-8ddd-539447b150f9"). InnerVolumeSpecName "kube-api-access-bkdbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.839603 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a28362bd-02ef-486a-b6a6-ffc342baf466-proxy-ca-bundles\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.839707 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a28362bd-02ef-486a-b6a6-ffc342baf466-client-ca\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.839738 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28362bd-02ef-486a-b6a6-ffc342baf466-config\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.839766 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a28362bd-02ef-486a-b6a6-ffc342baf466-serving-cert\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.839785 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plr5g\" (UniqueName: \"kubernetes.io/projected/a28362bd-02ef-486a-b6a6-ffc342baf466-kube-api-access-plr5g\") pod 
\"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.839845 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkdbz\" (UniqueName: \"kubernetes.io/projected/5decc231-d109-410b-8ddd-539447b150f9-kube-api-access-bkdbz\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.839859 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5decc231-d109-410b-8ddd-539447b150f9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.841105 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a28362bd-02ef-486a-b6a6-ffc342baf466-client-ca\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.841478 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a28362bd-02ef-486a-b6a6-ffc342baf466-proxy-ca-bundles\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.842249 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28362bd-02ef-486a-b6a6-ffc342baf466-config\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.846212 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a28362bd-02ef-486a-b6a6-ffc342baf466-serving-cert\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.863516 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr5g\" (UniqueName: \"kubernetes.io/projected/a28362bd-02ef-486a-b6a6-ffc342baf466-kube-api-access-plr5g\") pod \"controller-manager-bdd649c4d-s6vrd\" (UID: \"a28362bd-02ef-486a-b6a6-ffc342baf466\") " pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:04 crc kubenswrapper[4775]: I1125 19:39:04.967268 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.156641 4775 generic.go:334] "Generic (PLEG): container finished" podID="5decc231-d109-410b-8ddd-539447b150f9" containerID="eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956" exitCode=0 Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.156784 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" event={"ID":"5decc231-d109-410b-8ddd-539447b150f9","Type":"ContainerDied","Data":"eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956"} Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.156993 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" event={"ID":"5decc231-d109-410b-8ddd-539447b150f9","Type":"ContainerDied","Data":"27b070f6ac48844e7e47a928a7bf422c8afa52ccdc898256e492b67f493f2104"} Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.157012 4775 scope.go:117] 
"RemoveContainer" containerID="eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956" Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.156844 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-787df948cc-j8rwc" Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.173577 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-787df948cc-j8rwc"] Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.178905 4775 scope.go:117] "RemoveContainer" containerID="eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956" Nov 25 19:39:05 crc kubenswrapper[4775]: E1125 19:39:05.179234 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956\": container with ID starting with eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956 not found: ID does not exist" containerID="eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956" Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.179272 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956"} err="failed to get container status \"eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956\": rpc error: code = NotFound desc = could not find container \"eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956\": container with ID starting with eaee238d2dcd6f716a418d336376cbccee4ad03c39bfe23e6fc0e3bdb0dfc956 not found: ID does not exist" Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.183843 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bdd649c4d-s6vrd"] Nov 25 19:39:05 crc kubenswrapper[4775]: I1125 19:39:05.184219 4775 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-787df948cc-j8rwc"] Nov 25 19:39:06 crc kubenswrapper[4775]: I1125 19:39:06.182480 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" event={"ID":"a28362bd-02ef-486a-b6a6-ffc342baf466","Type":"ContainerStarted","Data":"a71a224c613e62a0893e4e8dad8c5ab933e409c3c99ea2238b698dee6e8282ea"} Nov 25 19:39:06 crc kubenswrapper[4775]: I1125 19:39:06.183030 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" event={"ID":"a28362bd-02ef-486a-b6a6-ffc342baf466","Type":"ContainerStarted","Data":"61897db76478d294d195327ff493848cdc6c8a6d791596b58968f3035839c4e3"} Nov 25 19:39:06 crc kubenswrapper[4775]: I1125 19:39:06.184550 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:06 crc kubenswrapper[4775]: I1125 19:39:06.193946 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" Nov 25 19:39:06 crc kubenswrapper[4775]: I1125 19:39:06.215797 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bdd649c4d-s6vrd" podStartSLOduration=4.215777595 podStartE2EDuration="4.215777595s" podCreationTimestamp="2025-11-25 19:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:39:06.210892629 +0000 UTC m=+328.127255025" watchObservedRunningTime="2025-11-25 19:39:06.215777595 +0000 UTC m=+328.132139951" Nov 25 19:39:06 crc kubenswrapper[4775]: I1125 19:39:06.860999 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5decc231-d109-410b-8ddd-539447b150f9" 
path="/var/lib/kubelet/pods/5decc231-d109-410b-8ddd-539447b150f9/volumes" Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.118068 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp"] Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.119763 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" podUID="ba721fca-b29a-4157-bc28-8c07078210e0" containerName="route-controller-manager" containerID="cri-o://326c524b513a9b9a33eacf9023068631eee6b2350d763f998c10d6486afcfa79" gracePeriod=30 Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.320772 4775 generic.go:334] "Generic (PLEG): container finished" podID="ba721fca-b29a-4157-bc28-8c07078210e0" containerID="326c524b513a9b9a33eacf9023068631eee6b2350d763f998c10d6486afcfa79" exitCode=0 Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.320863 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" event={"ID":"ba721fca-b29a-4157-bc28-8c07078210e0","Type":"ContainerDied","Data":"326c524b513a9b9a33eacf9023068631eee6b2350d763f998c10d6486afcfa79"} Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.648760 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.806435 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-config\") pod \"ba721fca-b29a-4157-bc28-8c07078210e0\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.806561 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-client-ca\") pod \"ba721fca-b29a-4157-bc28-8c07078210e0\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.806759 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba721fca-b29a-4157-bc28-8c07078210e0-serving-cert\") pod \"ba721fca-b29a-4157-bc28-8c07078210e0\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.806828 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkmc5\" (UniqueName: \"kubernetes.io/projected/ba721fca-b29a-4157-bc28-8c07078210e0-kube-api-access-vkmc5\") pod \"ba721fca-b29a-4157-bc28-8c07078210e0\" (UID: \"ba721fca-b29a-4157-bc28-8c07078210e0\") " Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.807628 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "ba721fca-b29a-4157-bc28-8c07078210e0" (UID: "ba721fca-b29a-4157-bc28-8c07078210e0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.807738 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-config" (OuterVolumeSpecName: "config") pod "ba721fca-b29a-4157-bc28-8c07078210e0" (UID: "ba721fca-b29a-4157-bc28-8c07078210e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.808540 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.808607 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba721fca-b29a-4157-bc28-8c07078210e0-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.815506 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba721fca-b29a-4157-bc28-8c07078210e0-kube-api-access-vkmc5" (OuterVolumeSpecName: "kube-api-access-vkmc5") pod "ba721fca-b29a-4157-bc28-8c07078210e0" (UID: "ba721fca-b29a-4157-bc28-8c07078210e0"). InnerVolumeSpecName "kube-api-access-vkmc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.829747 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba721fca-b29a-4157-bc28-8c07078210e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ba721fca-b29a-4157-bc28-8c07078210e0" (UID: "ba721fca-b29a-4157-bc28-8c07078210e0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.910298 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba721fca-b29a-4157-bc28-8c07078210e0-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:27 crc kubenswrapper[4775]: I1125 19:39:27.910359 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkmc5\" (UniqueName: \"kubernetes.io/projected/ba721fca-b29a-4157-bc28-8c07078210e0-kube-api-access-vkmc5\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.330827 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" event={"ID":"ba721fca-b29a-4157-bc28-8c07078210e0","Type":"ContainerDied","Data":"230559cfd452085f03b8dc77868cd58a2cfb7cbe70e45ea9cf6960d62e5c4230"} Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.330908 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.330915 4775 scope.go:117] "RemoveContainer" containerID="326c524b513a9b9a33eacf9023068631eee6b2350d763f998c10d6486afcfa79" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.370091 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp"] Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.380771 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9d8fdb97-4g7xp"] Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.503711 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf"] Nov 25 19:39:28 crc kubenswrapper[4775]: E1125 19:39:28.504089 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba721fca-b29a-4157-bc28-8c07078210e0" containerName="route-controller-manager" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.504115 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba721fca-b29a-4157-bc28-8c07078210e0" containerName="route-controller-manager" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.504293 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba721fca-b29a-4157-bc28-8c07078210e0" containerName="route-controller-manager" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.505513 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.508641 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.508725 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.508820 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.508973 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.509878 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.510227 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.534028 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf"] Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.621223 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35869036-d6c6-4601-9226-b664f4d83f1b-client-ca\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.621389 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb22w\" (UniqueName: \"kubernetes.io/projected/35869036-d6c6-4601-9226-b664f4d83f1b-kube-api-access-fb22w\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.621454 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35869036-d6c6-4601-9226-b664f4d83f1b-config\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.621519 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35869036-d6c6-4601-9226-b664f4d83f1b-serving-cert\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.723016 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb22w\" (UniqueName: \"kubernetes.io/projected/35869036-d6c6-4601-9226-b664f4d83f1b-kube-api-access-fb22w\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.723095 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35869036-d6c6-4601-9226-b664f4d83f1b-config\") pod 
\"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.723150 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35869036-d6c6-4601-9226-b664f4d83f1b-serving-cert\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.723224 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35869036-d6c6-4601-9226-b664f4d83f1b-client-ca\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.726197 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35869036-d6c6-4601-9226-b664f4d83f1b-client-ca\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.726447 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35869036-d6c6-4601-9226-b664f4d83f1b-config\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.729138 4775 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35869036-d6c6-4601-9226-b664f4d83f1b-serving-cert\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.747875 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb22w\" (UniqueName: \"kubernetes.io/projected/35869036-d6c6-4601-9226-b664f4d83f1b-kube-api-access-fb22w\") pod \"route-controller-manager-68d654cb76-dtxqf\" (UID: \"35869036-d6c6-4601-9226-b664f4d83f1b\") " pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.830363 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:28 crc kubenswrapper[4775]: I1125 19:39:28.863769 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba721fca-b29a-4157-bc28-8c07078210e0" path="/var/lib/kubelet/pods/ba721fca-b29a-4157-bc28-8c07078210e0/volumes" Nov 25 19:39:29 crc kubenswrapper[4775]: I1125 19:39:29.286688 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf"] Nov 25 19:39:29 crc kubenswrapper[4775]: I1125 19:39:29.339669 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" event={"ID":"35869036-d6c6-4601-9226-b664f4d83f1b","Type":"ContainerStarted","Data":"3d774b5028f7a2deb9662fefb8f0614e83a0f86557e283ecd5bdaa1861bfe57e"} Nov 25 19:39:30 crc kubenswrapper[4775]: I1125 19:39:30.348141 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" 
event={"ID":"35869036-d6c6-4601-9226-b664f4d83f1b","Type":"ContainerStarted","Data":"72dfdf05e6d666af292bbfab30a75d2fb02a6209b24e03722aca1f9245d1ae64"} Nov 25 19:39:30 crc kubenswrapper[4775]: I1125 19:39:30.348656 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:30 crc kubenswrapper[4775]: I1125 19:39:30.354969 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" Nov 25 19:39:30 crc kubenswrapper[4775]: I1125 19:39:30.374709 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68d654cb76-dtxqf" podStartSLOduration=3.3746806019999998 podStartE2EDuration="3.374680602s" podCreationTimestamp="2025-11-25 19:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:39:30.371974377 +0000 UTC m=+352.288336783" watchObservedRunningTime="2025-11-25 19:39:30.374680602 +0000 UTC m=+352.291042968" Nov 25 19:39:41 crc kubenswrapper[4775]: I1125 19:39:41.070128 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:39:41 crc kubenswrapper[4775]: I1125 19:39:41.070861 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 
19:39:42.118292 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsmsk"] Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.119220 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jsmsk" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerName="registry-server" containerID="cri-o://3e2185c0afa4f5b972d337b7087a1e08bfb119a23d3fc18b9f8dee8dca2156e1" gracePeriod=30 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.128267 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hnvt5"] Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.128958 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hnvt5" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerName="registry-server" containerID="cri-o://8520d549c9a67cd60bae471fb215e9e3ae8e908cc44faadef963753599b87cf2" gracePeriod=30 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.142605 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5h4vj"] Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.143207 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" podUID="3566ef9c-3d80-480e-b069-1ff60753877f" containerName="marketplace-operator" containerID="cri-o://d7cb6d42003dc5b3be234d14573df5a22421cd58659236519e473264062b62b9" gracePeriod=30 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.149895 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brl6t"] Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.150212 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-brl6t" 
podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerName="registry-server" containerID="cri-o://f284c026f059a9fedc8a39245fd1a05f6bfb9045613d4accaceb9a7f41c57fe9" gracePeriod=30 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.161308 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2698"] Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.161675 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w2698" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerName="registry-server" containerID="cri-o://67bd3ffd7ee4a7092ff9ca4be65d5b2a7dc20e26038560dc52b937eeb147b287" gracePeriod=30 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.173607 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4vkgv"] Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.189038 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.193061 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4vkgv"] Nov 25 19:39:42 crc kubenswrapper[4775]: E1125 19:39:42.216060 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67bd3ffd7ee4a7092ff9ca4be65d5b2a7dc20e26038560dc52b937eeb147b287" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 19:39:42 crc kubenswrapper[4775]: E1125 19:39:42.218732 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67bd3ffd7ee4a7092ff9ca4be65d5b2a7dc20e26038560dc52b937eeb147b287" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 19:39:42 crc kubenswrapper[4775]: E1125 19:39:42.230500 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67bd3ffd7ee4a7092ff9ca4be65d5b2a7dc20e26038560dc52b937eeb147b287" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 19:39:42 crc kubenswrapper[4775]: E1125 19:39:42.230627 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-operators-w2698" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerName="registry-server" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.343391 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjxk\" 
(UniqueName: \"kubernetes.io/projected/a9644641-2767-4104-b381-c7a264debd71-kube-api-access-jbjxk\") pod \"marketplace-operator-79b997595-4vkgv\" (UID: \"a9644641-2767-4104-b381-c7a264debd71\") " pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.343462 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9644641-2767-4104-b381-c7a264debd71-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4vkgv\" (UID: \"a9644641-2767-4104-b381-c7a264debd71\") " pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.343550 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9644641-2767-4104-b381-c7a264debd71-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4vkgv\" (UID: \"a9644641-2767-4104-b381-c7a264debd71\") " pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.423006 4775 generic.go:334] "Generic (PLEG): container finished" podID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerID="f284c026f059a9fedc8a39245fd1a05f6bfb9045613d4accaceb9a7f41c57fe9" exitCode=0 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.423113 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brl6t" event={"ID":"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1","Type":"ContainerDied","Data":"f284c026f059a9fedc8a39245fd1a05f6bfb9045613d4accaceb9a7f41c57fe9"} Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.426557 4775 generic.go:334] "Generic (PLEG): container finished" podID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerID="3e2185c0afa4f5b972d337b7087a1e08bfb119a23d3fc18b9f8dee8dca2156e1" 
exitCode=0 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.426636 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsmsk" event={"ID":"25f6b7d2-1661-4d49-8648-2f665206c2e9","Type":"ContainerDied","Data":"3e2185c0afa4f5b972d337b7087a1e08bfb119a23d3fc18b9f8dee8dca2156e1"} Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.430035 4775 generic.go:334] "Generic (PLEG): container finished" podID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerID="8520d549c9a67cd60bae471fb215e9e3ae8e908cc44faadef963753599b87cf2" exitCode=0 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.430102 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hnvt5" event={"ID":"3516b667-e83c-45a5-9f21-6bf5e0572b9a","Type":"ContainerDied","Data":"8520d549c9a67cd60bae471fb215e9e3ae8e908cc44faadef963753599b87cf2"} Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.435935 4775 generic.go:334] "Generic (PLEG): container finished" podID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerID="67bd3ffd7ee4a7092ff9ca4be65d5b2a7dc20e26038560dc52b937eeb147b287" exitCode=0 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.436063 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2698" event={"ID":"b45b3f08-fc2c-46cc-b48d-edfc0183c332","Type":"ContainerDied","Data":"67bd3ffd7ee4a7092ff9ca4be65d5b2a7dc20e26038560dc52b937eeb147b287"} Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.444649 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9644641-2767-4104-b381-c7a264debd71-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4vkgv\" (UID: \"a9644641-2767-4104-b381-c7a264debd71\") " pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.444749 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjxk\" (UniqueName: \"kubernetes.io/projected/a9644641-2767-4104-b381-c7a264debd71-kube-api-access-jbjxk\") pod \"marketplace-operator-79b997595-4vkgv\" (UID: \"a9644641-2767-4104-b381-c7a264debd71\") " pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.444780 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9644641-2767-4104-b381-c7a264debd71-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4vkgv\" (UID: \"a9644641-2767-4104-b381-c7a264debd71\") " pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.446325 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9644641-2767-4104-b381-c7a264debd71-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4vkgv\" (UID: \"a9644641-2767-4104-b381-c7a264debd71\") " pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.446684 4775 generic.go:334] "Generic (PLEG): container finished" podID="3566ef9c-3d80-480e-b069-1ff60753877f" containerID="d7cb6d42003dc5b3be234d14573df5a22421cd58659236519e473264062b62b9" exitCode=0 Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.446723 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" event={"ID":"3566ef9c-3d80-480e-b069-1ff60753877f","Type":"ContainerDied","Data":"d7cb6d42003dc5b3be234d14573df5a22421cd58659236519e473264062b62b9"} Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.446762 4775 scope.go:117] "RemoveContainer" containerID="faf8caab22e1737baddc8abc010b031989665f95cfcfad0880cff713cf4399c1" Nov 25 
19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.459929 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9644641-2767-4104-b381-c7a264debd71-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4vkgv\" (UID: \"a9644641-2767-4104-b381-c7a264debd71\") " pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.465359 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjxk\" (UniqueName: \"kubernetes.io/projected/a9644641-2767-4104-b381-c7a264debd71-kube-api-access-jbjxk\") pod \"marketplace-operator-79b997595-4vkgv\" (UID: \"a9644641-2767-4104-b381-c7a264debd71\") " pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.580356 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.644518 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.703149 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brl6t" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.710833 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.712355 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.734790 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.753770 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qfcc\" (UniqueName: \"kubernetes.io/projected/3566ef9c-3d80-480e-b069-1ff60753877f-kube-api-access-8qfcc\") pod \"3566ef9c-3d80-480e-b069-1ff60753877f\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.753828 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-trusted-ca\") pod \"3566ef9c-3d80-480e-b069-1ff60753877f\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.753886 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-operator-metrics\") pod \"3566ef9c-3d80-480e-b069-1ff60753877f\" (UID: \"3566ef9c-3d80-480e-b069-1ff60753877f\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.757244 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3566ef9c-3d80-480e-b069-1ff60753877f" (UID: "3566ef9c-3d80-480e-b069-1ff60753877f"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.760448 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3566ef9c-3d80-480e-b069-1ff60753877f-kube-api-access-8qfcc" (OuterVolumeSpecName: "kube-api-access-8qfcc") pod "3566ef9c-3d80-480e-b069-1ff60753877f" (UID: "3566ef9c-3d80-480e-b069-1ff60753877f"). InnerVolumeSpecName "kube-api-access-8qfcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.761570 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3566ef9c-3d80-480e-b069-1ff60753877f" (UID: "3566ef9c-3d80-480e-b069-1ff60753877f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854665 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-catalog-content\") pod \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854733 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkkjn\" (UniqueName: \"kubernetes.io/projected/3516b667-e83c-45a5-9f21-6bf5e0572b9a-kube-api-access-vkkjn\") pod \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854757 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p86hz\" (UniqueName: \"kubernetes.io/projected/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-kube-api-access-p86hz\") pod 
\"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854805 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-utilities\") pod \"25f6b7d2-1661-4d49-8648-2f665206c2e9\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854830 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-utilities\") pod \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854873 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-catalog-content\") pod \"25f6b7d2-1661-4d49-8648-2f665206c2e9\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854905 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-utilities\") pod \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854931 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-utilities\") pod \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\" (UID: \"3516b667-e83c-45a5-9f21-6bf5e0572b9a\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854964 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-catalog-content\") pod \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.854982 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qzbb\" (UniqueName: \"kubernetes.io/projected/b45b3f08-fc2c-46cc-b48d-edfc0183c332-kube-api-access-5qzbb\") pod \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\" (UID: \"b45b3f08-fc2c-46cc-b48d-edfc0183c332\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.855012 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-catalog-content\") pod \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\" (UID: \"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.855032 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwsxx\" (UniqueName: \"kubernetes.io/projected/25f6b7d2-1661-4d49-8648-2f665206c2e9-kube-api-access-wwsxx\") pod \"25f6b7d2-1661-4d49-8648-2f665206c2e9\" (UID: \"25f6b7d2-1661-4d49-8648-2f665206c2e9\") " Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.855222 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qfcc\" (UniqueName: \"kubernetes.io/projected/3566ef9c-3d80-480e-b069-1ff60753877f-kube-api-access-8qfcc\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.855235 4775 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.855249 4775 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3566ef9c-3d80-480e-b069-1ff60753877f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.855927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-utilities" (OuterVolumeSpecName: "utilities") pod "b45b3f08-fc2c-46cc-b48d-edfc0183c332" (UID: "b45b3f08-fc2c-46cc-b48d-edfc0183c332"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.856168 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-utilities" (OuterVolumeSpecName: "utilities") pod "6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" (UID: "6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.856178 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-utilities" (OuterVolumeSpecName: "utilities") pod "25f6b7d2-1661-4d49-8648-2f665206c2e9" (UID: "25f6b7d2-1661-4d49-8648-2f665206c2e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.856292 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-utilities" (OuterVolumeSpecName: "utilities") pod "3516b667-e83c-45a5-9f21-6bf5e0572b9a" (UID: "3516b667-e83c-45a5-9f21-6bf5e0572b9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.861042 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f6b7d2-1661-4d49-8648-2f665206c2e9-kube-api-access-wwsxx" (OuterVolumeSpecName: "kube-api-access-wwsxx") pod "25f6b7d2-1661-4d49-8648-2f665206c2e9" (UID: "25f6b7d2-1661-4d49-8648-2f665206c2e9"). InnerVolumeSpecName "kube-api-access-wwsxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.861073 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3516b667-e83c-45a5-9f21-6bf5e0572b9a-kube-api-access-vkkjn" (OuterVolumeSpecName: "kube-api-access-vkkjn") pod "3516b667-e83c-45a5-9f21-6bf5e0572b9a" (UID: "3516b667-e83c-45a5-9f21-6bf5e0572b9a"). InnerVolumeSpecName "kube-api-access-vkkjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.862410 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-kube-api-access-p86hz" (OuterVolumeSpecName: "kube-api-access-p86hz") pod "6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" (UID: "6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1"). InnerVolumeSpecName "kube-api-access-p86hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.865572 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45b3f08-fc2c-46cc-b48d-edfc0183c332-kube-api-access-5qzbb" (OuterVolumeSpecName: "kube-api-access-5qzbb") pod "b45b3f08-fc2c-46cc-b48d-edfc0183c332" (UID: "b45b3f08-fc2c-46cc-b48d-edfc0183c332"). InnerVolumeSpecName "kube-api-access-5qzbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.882703 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" (UID: "6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.927037 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3516b667-e83c-45a5-9f21-6bf5e0572b9a" (UID: "3516b667-e83c-45a5-9f21-6bf5e0572b9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.927771 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25f6b7d2-1661-4d49-8648-2f665206c2e9" (UID: "25f6b7d2-1661-4d49-8648-2f665206c2e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.952595 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b45b3f08-fc2c-46cc-b48d-edfc0183c332" (UID: "b45b3f08-fc2c-46cc-b48d-edfc0183c332"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.957109 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.957332 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.957347 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.957358 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45b3f08-fc2c-46cc-b48d-edfc0183c332-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.957372 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qzbb\" (UniqueName: \"kubernetes.io/projected/b45b3f08-fc2c-46cc-b48d-edfc0183c332-kube-api-access-5qzbb\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.957524 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.958111 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwsxx\" (UniqueName: \"kubernetes.io/projected/25f6b7d2-1661-4d49-8648-2f665206c2e9-kube-api-access-wwsxx\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 
19:39:42.958222 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3516b667-e83c-45a5-9f21-6bf5e0572b9a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.958295 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p86hz\" (UniqueName: \"kubernetes.io/projected/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-kube-api-access-p86hz\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.958316 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkkjn\" (UniqueName: \"kubernetes.io/projected/3516b667-e83c-45a5-9f21-6bf5e0572b9a-kube-api-access-vkkjn\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.958331 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f6b7d2-1661-4d49-8648-2f665206c2e9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:42 crc kubenswrapper[4775]: I1125 19:39:42.958357 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.064859 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4vkgv"] Nov 25 19:39:43 crc kubenswrapper[4775]: W1125 19:39:43.075939 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9644641_2767_4104_b381_c7a264debd71.slice/crio-f071e1d4832c57217d714063bd22d49663450b7d59dd85c219bd689602929892 WatchSource:0}: Error finding container f071e1d4832c57217d714063bd22d49663450b7d59dd85c219bd689602929892: Status 404 returned error can't find the container with id 
f071e1d4832c57217d714063bd22d49663450b7d59dd85c219bd689602929892 Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.460968 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2698" event={"ID":"b45b3f08-fc2c-46cc-b48d-edfc0183c332","Type":"ContainerDied","Data":"c12f63677395509853500f5d355fbe9882697fb853a4c2f618df971ac70b312e"} Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.461530 4775 scope.go:117] "RemoveContainer" containerID="67bd3ffd7ee4a7092ff9ca4be65d5b2a7dc20e26038560dc52b937eeb147b287" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.461020 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2698" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.463210 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" event={"ID":"a9644641-2767-4104-b381-c7a264debd71","Type":"ContainerStarted","Data":"287b42794e51075ce22d809478a6f6f2596832a2d7614f7d8a54f894c7ab2872"} Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.463412 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" event={"ID":"a9644641-2767-4104-b381-c7a264debd71","Type":"ContainerStarted","Data":"f071e1d4832c57217d714063bd22d49663450b7d59dd85c219bd689602929892"} Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.465714 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.466898 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4vkgv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" start-of-body= Nov 25 19:39:43 
crc kubenswrapper[4775]: I1125 19:39:43.466950 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" podUID="a9644641-2767-4104-b381-c7a264debd71" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.468112 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" event={"ID":"3566ef9c-3d80-480e-b069-1ff60753877f","Type":"ContainerDied","Data":"002250c77db16f0bb771231857cc900025b35196efc61293539be45edcb6144a"} Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.468331 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5h4vj" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.474075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brl6t" event={"ID":"6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1","Type":"ContainerDied","Data":"6dcbf5e63909790b333a4d9980207459387e61df22e76d36f580523ff41803a7"} Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.474184 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brl6t" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.487538 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsmsk" event={"ID":"25f6b7d2-1661-4d49-8648-2f665206c2e9","Type":"ContainerDied","Data":"c0c50cd4e9fbaf98c08713dfe9e13ce05f9dd6a1d630bad16995491d735ba540"} Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.487620 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsmsk" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.492667 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hnvt5" event={"ID":"3516b667-e83c-45a5-9f21-6bf5e0572b9a","Type":"ContainerDied","Data":"eb407860ca2a5645cde81d716d4d920e2221ea58587bba6f827e07570da317fc"} Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.492769 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hnvt5" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.499956 4775 scope.go:117] "RemoveContainer" containerID="88854a2016b9e5c5d40bed64daf14652446b291c79fb69b9dc3d16acbe0c2e69" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.530099 4775 scope.go:117] "RemoveContainer" containerID="416eef73c9a3136a2b9cc11f59d36006f90ca1a7f44c760fc12a07a6dd9b27fc" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.530465 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" podStartSLOduration=1.530408155 podStartE2EDuration="1.530408155s" podCreationTimestamp="2025-11-25 19:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:39:43.500149845 +0000 UTC m=+365.416512211" watchObservedRunningTime="2025-11-25 19:39:43.530408155 +0000 UTC m=+365.446770521" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.532423 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2698"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.537821 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w2698"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.559055 4775 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5h4vj"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.559351 4775 scope.go:117] "RemoveContainer" containerID="d7cb6d42003dc5b3be234d14573df5a22421cd58659236519e473264062b62b9" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.562219 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5h4vj"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.584067 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brl6t"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.596032 4775 scope.go:117] "RemoveContainer" containerID="f284c026f059a9fedc8a39245fd1a05f6bfb9045613d4accaceb9a7f41c57fe9" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.596336 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-brl6t"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.606146 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hnvt5"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.612691 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hnvt5"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.617005 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsmsk"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.620506 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jsmsk"] Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.625872 4775 scope.go:117] "RemoveContainer" containerID="ac3f8b23ca9691e4f9bde0baade6d4ffa10dbe4f01fb9c5ddec31216ce16710d" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.647707 4775 scope.go:117] "RemoveContainer" 
containerID="cf7620c6384ccf94048acf68f9a1d26fa6a64da3fe29514f730ab700b8017661" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.662926 4775 scope.go:117] "RemoveContainer" containerID="3e2185c0afa4f5b972d337b7087a1e08bfb119a23d3fc18b9f8dee8dca2156e1" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.678778 4775 scope.go:117] "RemoveContainer" containerID="a4641ebaebfc267cb3e684c417fea96db0d5482d1b3ae30bce8473578211171b" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.698809 4775 scope.go:117] "RemoveContainer" containerID="39ce28b2856c8d25388324d2f1ac4f71c7df48dfb702eecfaef99dcb155fc7de" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.714053 4775 scope.go:117] "RemoveContainer" containerID="8520d549c9a67cd60bae471fb215e9e3ae8e908cc44faadef963753599b87cf2" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.733767 4775 scope.go:117] "RemoveContainer" containerID="1c0d45d2c4f1938e156d87b399c3737d5cb55e73c2b1c7af70cbdab3a6293cde" Nov 25 19:39:43 crc kubenswrapper[4775]: I1125 19:39:43.752328 4775 scope.go:117] "RemoveContainer" containerID="eedd9def46401a4f88bd79846419d037d980182c10611505f69a85d19134678a" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.118904 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f92v6"] Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119522 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerName="extract-utilities" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119536 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerName="extract-utilities" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119549 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerName="extract-utilities" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119554 4775 
state_mem.go:107] "Deleted CPUSet assignment" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerName="extract-utilities" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119563 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerName="extract-utilities" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119569 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerName="extract-utilities" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119577 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerName="extract-content" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119583 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerName="extract-content" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119592 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerName="extract-content" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119598 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerName="extract-content" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119605 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119611 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119618 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119624 4775 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119634 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerName="extract-utilities" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119639 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerName="extract-utilities" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119651 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerName="extract-content" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119669 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerName="extract-content" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119677 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3566ef9c-3d80-480e-b069-1ff60753877f" containerName="marketplace-operator" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119683 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3566ef9c-3d80-480e-b069-1ff60753877f" containerName="marketplace-operator" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119694 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119700 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119708 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119713 4775 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119723 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerName="extract-content" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119731 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerName="extract-content" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119815 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119826 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119834 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119845 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" containerName="registry-server" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119853 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3566ef9c-3d80-480e-b069-1ff60753877f" containerName="marketplace-operator" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.119861 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3566ef9c-3d80-480e-b069-1ff60753877f" containerName="marketplace-operator" Nov 25 19:39:44 crc kubenswrapper[4775]: E1125 19:39:44.119943 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3566ef9c-3d80-480e-b069-1ff60753877f" containerName="marketplace-operator" Nov 25 19:39:44 crc kubenswrapper[4775]: 
I1125 19:39:44.119950 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3566ef9c-3d80-480e-b069-1ff60753877f" containerName="marketplace-operator" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.120545 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.122692 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.130512 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f92v6"] Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.182239 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a58bfa8-f307-48e3-be90-e1ee238efe08-utilities\") pod \"redhat-marketplace-f92v6\" (UID: \"0a58bfa8-f307-48e3-be90-e1ee238efe08\") " pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.182302 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a58bfa8-f307-48e3-be90-e1ee238efe08-catalog-content\") pod \"redhat-marketplace-f92v6\" (UID: \"0a58bfa8-f307-48e3-be90-e1ee238efe08\") " pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.182331 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwg2w\" (UniqueName: \"kubernetes.io/projected/0a58bfa8-f307-48e3-be90-e1ee238efe08-kube-api-access-kwg2w\") pod \"redhat-marketplace-f92v6\" (UID: \"0a58bfa8-f307-48e3-be90-e1ee238efe08\") " pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 
19:39:44.283320 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a58bfa8-f307-48e3-be90-e1ee238efe08-catalog-content\") pod \"redhat-marketplace-f92v6\" (UID: \"0a58bfa8-f307-48e3-be90-e1ee238efe08\") " pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.283394 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwg2w\" (UniqueName: \"kubernetes.io/projected/0a58bfa8-f307-48e3-be90-e1ee238efe08-kube-api-access-kwg2w\") pod \"redhat-marketplace-f92v6\" (UID: \"0a58bfa8-f307-48e3-be90-e1ee238efe08\") " pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.283471 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a58bfa8-f307-48e3-be90-e1ee238efe08-utilities\") pod \"redhat-marketplace-f92v6\" (UID: \"0a58bfa8-f307-48e3-be90-e1ee238efe08\") " pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.284299 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a58bfa8-f307-48e3-be90-e1ee238efe08-utilities\") pod \"redhat-marketplace-f92v6\" (UID: \"0a58bfa8-f307-48e3-be90-e1ee238efe08\") " pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.284331 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a58bfa8-f307-48e3-be90-e1ee238efe08-catalog-content\") pod \"redhat-marketplace-f92v6\" (UID: \"0a58bfa8-f307-48e3-be90-e1ee238efe08\") " pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.306821 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kwg2w\" (UniqueName: \"kubernetes.io/projected/0a58bfa8-f307-48e3-be90-e1ee238efe08-kube-api-access-kwg2w\") pod \"redhat-marketplace-f92v6\" (UID: \"0a58bfa8-f307-48e3-be90-e1ee238efe08\") " pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.446247 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.510797 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4vkgv" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.720514 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nk42n"] Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.722119 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.727780 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nk42n"] Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.730856 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.858173 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25f6b7d2-1661-4d49-8648-2f665206c2e9" path="/var/lib/kubelet/pods/25f6b7d2-1661-4d49-8648-2f665206c2e9/volumes" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.858964 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3516b667-e83c-45a5-9f21-6bf5e0572b9a" path="/var/lib/kubelet/pods/3516b667-e83c-45a5-9f21-6bf5e0572b9a/volumes" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.859555 4775 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3566ef9c-3d80-480e-b069-1ff60753877f" path="/var/lib/kubelet/pods/3566ef9c-3d80-480e-b069-1ff60753877f/volumes" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.860527 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1" path="/var/lib/kubelet/pods/6d8ac5d6-cdb8-4bf0-8c8c-1970864a85d1/volumes" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.861178 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45b3f08-fc2c-46cc-b48d-edfc0183c332" path="/var/lib/kubelet/pods/b45b3f08-fc2c-46cc-b48d-edfc0183c332/volumes" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.883060 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f92v6"] Nov 25 19:39:44 crc kubenswrapper[4775]: W1125 19:39:44.889557 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a58bfa8_f307_48e3_be90_e1ee238efe08.slice/crio-a09b4728d70ff6ca6db5a95917116b1a09525ee6e080012bf9a3ed43f03a0205 WatchSource:0}: Error finding container a09b4728d70ff6ca6db5a95917116b1a09525ee6e080012bf9a3ed43f03a0205: Status 404 returned error can't find the container with id a09b4728d70ff6ca6db5a95917116b1a09525ee6e080012bf9a3ed43f03a0205 Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.889931 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-catalog-content\") pod \"certified-operators-nk42n\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.889974 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-utilities\") pod \"certified-operators-nk42n\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.890063 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fftfk\" (UniqueName: \"kubernetes.io/projected/e5ddc286-2bb2-438f-9a74-6279f0f76753-kube-api-access-fftfk\") pod \"certified-operators-nk42n\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.991108 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fftfk\" (UniqueName: \"kubernetes.io/projected/e5ddc286-2bb2-438f-9a74-6279f0f76753-kube-api-access-fftfk\") pod \"certified-operators-nk42n\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.991229 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-catalog-content\") pod \"certified-operators-nk42n\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.991283 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-utilities\") pod \"certified-operators-nk42n\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.991932 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-utilities\") pod \"certified-operators-nk42n\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:44 crc kubenswrapper[4775]: I1125 19:39:44.992173 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-catalog-content\") pod \"certified-operators-nk42n\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:45 crc kubenswrapper[4775]: I1125 19:39:45.012095 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fftfk\" (UniqueName: \"kubernetes.io/projected/e5ddc286-2bb2-438f-9a74-6279f0f76753-kube-api-access-fftfk\") pod \"certified-operators-nk42n\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:45 crc kubenswrapper[4775]: I1125 19:39:45.048978 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:45 crc kubenswrapper[4775]: I1125 19:39:45.457370 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nk42n"] Nov 25 19:39:45 crc kubenswrapper[4775]: I1125 19:39:45.513449 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a58bfa8-f307-48e3-be90-e1ee238efe08" containerID="d70046be8cd54c19a1c864f451146b430431dcb50096af690184481189fbc0b6" exitCode=0 Nov 25 19:39:45 crc kubenswrapper[4775]: I1125 19:39:45.513547 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f92v6" event={"ID":"0a58bfa8-f307-48e3-be90-e1ee238efe08","Type":"ContainerDied","Data":"d70046be8cd54c19a1c864f451146b430431dcb50096af690184481189fbc0b6"} Nov 25 19:39:45 crc kubenswrapper[4775]: I1125 19:39:45.513579 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f92v6" event={"ID":"0a58bfa8-f307-48e3-be90-e1ee238efe08","Type":"ContainerStarted","Data":"a09b4728d70ff6ca6db5a95917116b1a09525ee6e080012bf9a3ed43f03a0205"} Nov 25 19:39:45 crc kubenswrapper[4775]: I1125 19:39:45.515078 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk42n" event={"ID":"e5ddc286-2bb2-438f-9a74-6279f0f76753","Type":"ContainerStarted","Data":"61ea9db72acc91ae897fd0ffc991a4353c576e2a330d63f4564f62f31f86e929"} Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.527581 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f92v6" event={"ID":"0a58bfa8-f307-48e3-be90-e1ee238efe08","Type":"ContainerStarted","Data":"31d24e63776888f7f42207750f3eabc7c406bc66d8c9df8675cd1dfee164e848"} Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.529145 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gvqrf"] Nov 25 19:39:46 crc 
kubenswrapper[4775]: I1125 19:39:46.530937 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.533539 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.539542 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gvqrf"] Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.554684 4775 generic.go:334] "Generic (PLEG): container finished" podID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerID="67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c" exitCode=0 Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.555631 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk42n" event={"ID":"e5ddc286-2bb2-438f-9a74-6279f0f76753","Type":"ContainerDied","Data":"67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c"} Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.715515 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c7cb6e-134b-4de7-8d55-21a7e73705e2-utilities\") pod \"redhat-operators-gvqrf\" (UID: \"77c7cb6e-134b-4de7-8d55-21a7e73705e2\") " pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.715596 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c7cb6e-134b-4de7-8d55-21a7e73705e2-catalog-content\") pod \"redhat-operators-gvqrf\" (UID: \"77c7cb6e-134b-4de7-8d55-21a7e73705e2\") " pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.715621 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqkg\" (UniqueName: \"kubernetes.io/projected/77c7cb6e-134b-4de7-8d55-21a7e73705e2-kube-api-access-vfqkg\") pod \"redhat-operators-gvqrf\" (UID: \"77c7cb6e-134b-4de7-8d55-21a7e73705e2\") " pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.817074 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c7cb6e-134b-4de7-8d55-21a7e73705e2-catalog-content\") pod \"redhat-operators-gvqrf\" (UID: \"77c7cb6e-134b-4de7-8d55-21a7e73705e2\") " pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.817142 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfqkg\" (UniqueName: \"kubernetes.io/projected/77c7cb6e-134b-4de7-8d55-21a7e73705e2-kube-api-access-vfqkg\") pod \"redhat-operators-gvqrf\" (UID: \"77c7cb6e-134b-4de7-8d55-21a7e73705e2\") " pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.817503 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c7cb6e-134b-4de7-8d55-21a7e73705e2-utilities\") pod \"redhat-operators-gvqrf\" (UID: \"77c7cb6e-134b-4de7-8d55-21a7e73705e2\") " pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.818177 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c7cb6e-134b-4de7-8d55-21a7e73705e2-catalog-content\") pod \"redhat-operators-gvqrf\" (UID: \"77c7cb6e-134b-4de7-8d55-21a7e73705e2\") " pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.818324 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c7cb6e-134b-4de7-8d55-21a7e73705e2-utilities\") pod \"redhat-operators-gvqrf\" (UID: \"77c7cb6e-134b-4de7-8d55-21a7e73705e2\") " pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.844684 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfqkg\" (UniqueName: \"kubernetes.io/projected/77c7cb6e-134b-4de7-8d55-21a7e73705e2-kube-api-access-vfqkg\") pod \"redhat-operators-gvqrf\" (UID: \"77c7cb6e-134b-4de7-8d55-21a7e73705e2\") " pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:46 crc kubenswrapper[4775]: I1125 19:39:46.897912 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.129952 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k2fd4"] Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.131985 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.134213 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.134916 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2fd4"] Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.327496 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-catalog-content\") pod \"community-operators-k2fd4\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.328544 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqpdq\" (UniqueName: \"kubernetes.io/projected/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-kube-api-access-kqpdq\") pod \"community-operators-k2fd4\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.328694 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-utilities\") pod \"community-operators-k2fd4\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.335052 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gvqrf"] Nov 25 19:39:47 crc kubenswrapper[4775]: W1125 19:39:47.342252 4775 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77c7cb6e_134b_4de7_8d55_21a7e73705e2.slice/crio-1d721ce47c9f8984709b7a0623aea2c28ae413be46fc0e978987f01d246aeb36 WatchSource:0}: Error finding container 1d721ce47c9f8984709b7a0623aea2c28ae413be46fc0e978987f01d246aeb36: Status 404 returned error can't find the container with id 1d721ce47c9f8984709b7a0623aea2c28ae413be46fc0e978987f01d246aeb36 Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.429991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqpdq\" (UniqueName: \"kubernetes.io/projected/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-kube-api-access-kqpdq\") pod \"community-operators-k2fd4\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.430049 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-utilities\") pod \"community-operators-k2fd4\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.430078 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-catalog-content\") pod \"community-operators-k2fd4\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.430503 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-catalog-content\") pod \"community-operators-k2fd4\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " pod="openshift-marketplace/community-operators-k2fd4" Nov 25 
19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.430697 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-utilities\") pod \"community-operators-k2fd4\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.462755 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqpdq\" (UniqueName: \"kubernetes.io/projected/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-kube-api-access-kqpdq\") pod \"community-operators-k2fd4\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.562418 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a58bfa8-f307-48e3-be90-e1ee238efe08" containerID="31d24e63776888f7f42207750f3eabc7c406bc66d8c9df8675cd1dfee164e848" exitCode=0 Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.562516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f92v6" event={"ID":"0a58bfa8-f307-48e3-be90-e1ee238efe08","Type":"ContainerDied","Data":"31d24e63776888f7f42207750f3eabc7c406bc66d8c9df8675cd1dfee164e848"} Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.566057 4775 generic.go:334] "Generic (PLEG): container finished" podID="77c7cb6e-134b-4de7-8d55-21a7e73705e2" containerID="72a26443a4098fb1cf5f6af3683f2d8d4c89c7566034a8a71b34f6bd38cdd931" exitCode=0 Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.566105 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvqrf" event={"ID":"77c7cb6e-134b-4de7-8d55-21a7e73705e2","Type":"ContainerDied","Data":"72a26443a4098fb1cf5f6af3683f2d8d4c89c7566034a8a71b34f6bd38cdd931"} Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.566126 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvqrf" event={"ID":"77c7cb6e-134b-4de7-8d55-21a7e73705e2","Type":"ContainerStarted","Data":"1d721ce47c9f8984709b7a0623aea2c28ae413be46fc0e978987f01d246aeb36"} Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.570291 4775 generic.go:334] "Generic (PLEG): container finished" podID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerID="43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4" exitCode=0 Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.570318 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk42n" event={"ID":"e5ddc286-2bb2-438f-9a74-6279f0f76753","Type":"ContainerDied","Data":"43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4"} Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.755247 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:47 crc kubenswrapper[4775]: I1125 19:39:47.987236 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2fd4"] Nov 25 19:39:48 crc kubenswrapper[4775]: I1125 19:39:48.575751 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f92v6" event={"ID":"0a58bfa8-f307-48e3-be90-e1ee238efe08","Type":"ContainerStarted","Data":"388e9738a3d60aa42647b660906c2c8ccc92e5c52a4051b5cef021736dfb9a23"} Nov 25 19:39:48 crc kubenswrapper[4775]: I1125 19:39:48.579304 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk42n" event={"ID":"e5ddc286-2bb2-438f-9a74-6279f0f76753","Type":"ContainerStarted","Data":"49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0"} Nov 25 19:39:48 crc kubenswrapper[4775]: I1125 19:39:48.582176 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerID="64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d" exitCode=0 Nov 25 19:39:48 crc kubenswrapper[4775]: I1125 19:39:48.582219 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fd4" event={"ID":"1a73c6c1-fdc3-44b3-9b26-13821b4a7619","Type":"ContainerDied","Data":"64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d"} Nov 25 19:39:48 crc kubenswrapper[4775]: I1125 19:39:48.582255 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fd4" event={"ID":"1a73c6c1-fdc3-44b3-9b26-13821b4a7619","Type":"ContainerStarted","Data":"1fbeae41095cb24daa97d7c2f86203760ef161d344d3e280dbe060bd325a1a6e"} Nov 25 19:39:48 crc kubenswrapper[4775]: I1125 19:39:48.604077 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f92v6" podStartSLOduration=2.092646073 podStartE2EDuration="4.604054037s" podCreationTimestamp="2025-11-25 19:39:44 +0000 UTC" firstStartedPulling="2025-11-25 19:39:45.514612937 +0000 UTC m=+367.430975303" lastFinishedPulling="2025-11-25 19:39:48.026020901 +0000 UTC m=+369.942383267" observedRunningTime="2025-11-25 19:39:48.600105667 +0000 UTC m=+370.516468033" watchObservedRunningTime="2025-11-25 19:39:48.604054037 +0000 UTC m=+370.520416393" Nov 25 19:39:48 crc kubenswrapper[4775]: I1125 19:39:48.624215 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nk42n" podStartSLOduration=3.218549906 podStartE2EDuration="4.624199337s" podCreationTimestamp="2025-11-25 19:39:44 +0000 UTC" firstStartedPulling="2025-11-25 19:39:46.55774062 +0000 UTC m=+368.474102986" lastFinishedPulling="2025-11-25 19:39:47.963390051 +0000 UTC m=+369.879752417" observedRunningTime="2025-11-25 19:39:48.621418029 +0000 UTC m=+370.537780395" watchObservedRunningTime="2025-11-25 
19:39:48.624199337 +0000 UTC m=+370.540561693" Nov 25 19:39:49 crc kubenswrapper[4775]: I1125 19:39:49.589361 4775 generic.go:334] "Generic (PLEG): container finished" podID="77c7cb6e-134b-4de7-8d55-21a7e73705e2" containerID="b60a1853794efe569cc2795ec230619b98ebbb88b419904ee817b1ae53c12490" exitCode=0 Nov 25 19:39:49 crc kubenswrapper[4775]: I1125 19:39:49.589468 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvqrf" event={"ID":"77c7cb6e-134b-4de7-8d55-21a7e73705e2","Type":"ContainerDied","Data":"b60a1853794efe569cc2795ec230619b98ebbb88b419904ee817b1ae53c12490"} Nov 25 19:39:49 crc kubenswrapper[4775]: I1125 19:39:49.594344 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fd4" event={"ID":"1a73c6c1-fdc3-44b3-9b26-13821b4a7619","Type":"ContainerStarted","Data":"ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55"} Nov 25 19:39:50 crc kubenswrapper[4775]: I1125 19:39:50.600779 4775 generic.go:334] "Generic (PLEG): container finished" podID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerID="ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55" exitCode=0 Nov 25 19:39:50 crc kubenswrapper[4775]: I1125 19:39:50.600966 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fd4" event={"ID":"1a73c6c1-fdc3-44b3-9b26-13821b4a7619","Type":"ContainerDied","Data":"ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55"} Nov 25 19:39:50 crc kubenswrapper[4775]: I1125 19:39:50.604622 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvqrf" event={"ID":"77c7cb6e-134b-4de7-8d55-21a7e73705e2","Type":"ContainerStarted","Data":"0919587cfc45d5227adca1b20d4c44e8fcac18a8c2c1bee752040ae7a068729b"} Nov 25 19:39:52 crc kubenswrapper[4775]: I1125 19:39:52.618735 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-k2fd4" event={"ID":"1a73c6c1-fdc3-44b3-9b26-13821b4a7619","Type":"ContainerStarted","Data":"05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd"} Nov 25 19:39:52 crc kubenswrapper[4775]: I1125 19:39:52.640734 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k2fd4" podStartSLOduration=3.162192854 podStartE2EDuration="5.640700295s" podCreationTimestamp="2025-11-25 19:39:47 +0000 UTC" firstStartedPulling="2025-11-25 19:39:48.584826092 +0000 UTC m=+370.501188458" lastFinishedPulling="2025-11-25 19:39:51.063333533 +0000 UTC m=+372.979695899" observedRunningTime="2025-11-25 19:39:52.636232951 +0000 UTC m=+374.552595317" watchObservedRunningTime="2025-11-25 19:39:52.640700295 +0000 UTC m=+374.557062661" Nov 25 19:39:52 crc kubenswrapper[4775]: I1125 19:39:52.642155 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gvqrf" podStartSLOduration=4.193182308 podStartE2EDuration="6.642149566s" podCreationTimestamp="2025-11-25 19:39:46 +0000 UTC" firstStartedPulling="2025-11-25 19:39:47.567912378 +0000 UTC m=+369.484274754" lastFinishedPulling="2025-11-25 19:39:50.016879626 +0000 UTC m=+371.933242012" observedRunningTime="2025-11-25 19:39:50.6520026 +0000 UTC m=+372.568364976" watchObservedRunningTime="2025-11-25 19:39:52.642149566 +0000 UTC m=+374.558511922" Nov 25 19:39:54 crc kubenswrapper[4775]: I1125 19:39:54.446565 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:54 crc kubenswrapper[4775]: I1125 19:39:54.446704 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:54 crc kubenswrapper[4775]: I1125 19:39:54.492808 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:54 crc kubenswrapper[4775]: I1125 19:39:54.674942 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f92v6" Nov 25 19:39:55 crc kubenswrapper[4775]: I1125 19:39:55.049479 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:55 crc kubenswrapper[4775]: I1125 19:39:55.049530 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:55 crc kubenswrapper[4775]: I1125 19:39:55.090233 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:55 crc kubenswrapper[4775]: I1125 19:39:55.708639 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nk42n" Nov 25 19:39:56 crc kubenswrapper[4775]: I1125 19:39:56.898917 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:56 crc kubenswrapper[4775]: I1125 19:39:56.899408 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:56 crc kubenswrapper[4775]: I1125 19:39:56.948636 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.430160 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9kj5h"] Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.431693 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.456100 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9kj5h"] Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.593409 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-trusted-ca\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.593531 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-registry-tls\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.593584 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bwhc\" (UniqueName: \"kubernetes.io/projected/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-kube-api-access-5bwhc\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.593701 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.593743 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-bound-sa-token\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.593785 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.593823 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-registry-certificates\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.593937 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.624733 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.695422 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-trusted-ca\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.695498 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-registry-tls\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.695537 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bwhc\" (UniqueName: \"kubernetes.io/projected/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-kube-api-access-5bwhc\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.695611 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-bound-sa-token\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 
19:39:57.696193 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.696222 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-registry-certificates\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.696252 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.697743 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-registry-certificates\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.698068 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.700250 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-trusted-ca\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.711548 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.715420 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-bound-sa-token\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.716127 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gvqrf" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.717124 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-registry-tls\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.721907 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-5bwhc\" (UniqueName: \"kubernetes.io/projected/8eaedceb-d02f-4495-b80b-cddbc4d0b96c-kube-api-access-5bwhc\") pod \"image-registry-66df7c8f76-9kj5h\" (UID: \"8eaedceb-d02f-4495-b80b-cddbc4d0b96c\") " pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.755405 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.755450 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.791351 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:39:57 crc kubenswrapper[4775]: I1125 19:39:57.804863 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:58 crc kubenswrapper[4775]: I1125 19:39:58.240061 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9kj5h"] Nov 25 19:39:58 crc kubenswrapper[4775]: W1125 19:39:58.247365 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eaedceb_d02f_4495_b80b_cddbc4d0b96c.slice/crio-b6c871cadbed5a6930e22c41ecf5e49c784fe04af1fc30d3dc315c9d2857e86c WatchSource:0}: Error finding container b6c871cadbed5a6930e22c41ecf5e49c784fe04af1fc30d3dc315c9d2857e86c: Status 404 returned error can't find the container with id b6c871cadbed5a6930e22c41ecf5e49c784fe04af1fc30d3dc315c9d2857e86c Nov 25 19:39:58 crc kubenswrapper[4775]: I1125 19:39:58.660345 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" 
event={"ID":"8eaedceb-d02f-4495-b80b-cddbc4d0b96c","Type":"ContainerStarted","Data":"57a4631affc2e628d14c9f11cae1d59745550547e7d0e10c42f6a3a07fd57bd5"} Nov 25 19:39:58 crc kubenswrapper[4775]: I1125 19:39:58.661029 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" event={"ID":"8eaedceb-d02f-4495-b80b-cddbc4d0b96c","Type":"ContainerStarted","Data":"b6c871cadbed5a6930e22c41ecf5e49c784fe04af1fc30d3dc315c9d2857e86c"} Nov 25 19:39:58 crc kubenswrapper[4775]: I1125 19:39:58.744163 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k2fd4" Nov 25 19:39:58 crc kubenswrapper[4775]: I1125 19:39:58.770898 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" podStartSLOduration=1.770865282 podStartE2EDuration="1.770865282s" podCreationTimestamp="2025-11-25 19:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:39:58.686350584 +0000 UTC m=+380.602712970" watchObservedRunningTime="2025-11-25 19:39:58.770865282 +0000 UTC m=+380.687227648" Nov 25 19:39:59 crc kubenswrapper[4775]: I1125 19:39:59.666075 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:40:11 crc kubenswrapper[4775]: I1125 19:40:11.070732 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:40:11 crc kubenswrapper[4775]: I1125 19:40:11.071683 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:40:17 crc kubenswrapper[4775]: I1125 19:40:17.801494 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9kj5h" Nov 25 19:40:17 crc kubenswrapper[4775]: I1125 19:40:17.899080 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-75q9h"] Nov 25 19:40:41 crc kubenswrapper[4775]: I1125 19:40:41.071187 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:40:41 crc kubenswrapper[4775]: I1125 19:40:41.072039 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:40:41 crc kubenswrapper[4775]: I1125 19:40:41.072113 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:40:41 crc kubenswrapper[4775]: I1125 19:40:41.073170 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68860d36c20c28f09e5eee4f954a6781074667da8ec5ed23c8a9114454a7a494"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 19:40:41 crc 
kubenswrapper[4775]: I1125 19:40:41.073287 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://68860d36c20c28f09e5eee4f954a6781074667da8ec5ed23c8a9114454a7a494" gracePeriod=600 Nov 25 19:40:41 crc kubenswrapper[4775]: I1125 19:40:41.968689 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="68860d36c20c28f09e5eee4f954a6781074667da8ec5ed23c8a9114454a7a494" exitCode=0 Nov 25 19:40:41 crc kubenswrapper[4775]: I1125 19:40:41.968781 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"68860d36c20c28f09e5eee4f954a6781074667da8ec5ed23c8a9114454a7a494"} Nov 25 19:40:41 crc kubenswrapper[4775]: I1125 19:40:41.969090 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"814f270a52200f75169128bcfe904e73985125f44369ce9e0392e2533ead19f8"} Nov 25 19:40:41 crc kubenswrapper[4775]: I1125 19:40:41.969114 4775 scope.go:117] "RemoveContainer" containerID="8a23324611bd8bf83418e03d6c602b761c68306866fcf1a4f035487bc10dbf6c" Nov 25 19:40:42 crc kubenswrapper[4775]: I1125 19:40:42.956548 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" podUID="ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" containerName="registry" containerID="cri-o://3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae" gracePeriod=30 Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.448775 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.551698 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-trusted-ca\") pod \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.551757 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-certificates\") pod \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.551789 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-installation-pull-secrets\") pod \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.551846 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-tls\") pod \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.551886 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-bound-sa-token\") pod \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.551909 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x66n8\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-kube-api-access-x66n8\") pod \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.552085 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.552136 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-ca-trust-extracted\") pod \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\" (UID: \"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e\") " Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.552888 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.553133 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.559532 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.559689 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.560188 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.561259 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-kube-api-access-x66n8" (OuterVolumeSpecName: "kube-api-access-x66n8") pod "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e"). InnerVolumeSpecName "kube-api-access-x66n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.562043 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.571988 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" (UID: "ca4b44ae-0ced-4acf-aa65-92a6fda3f98e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.653632 4775 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.653688 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.653698 4775 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.653709 4775 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.653717 4775 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.653726 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.653734 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x66n8\" (UniqueName: \"kubernetes.io/projected/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e-kube-api-access-x66n8\") on node \"crc\" DevicePath \"\"" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.994939 4775 generic.go:334] "Generic (PLEG): container finished" podID="ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" containerID="3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae" exitCode=0 Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.995025 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" event={"ID":"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e","Type":"ContainerDied","Data":"3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae"} Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.995099 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" event={"ID":"ca4b44ae-0ced-4acf-aa65-92a6fda3f98e","Type":"ContainerDied","Data":"bc852cdebed05722143dfb25b1a2d01fe742487a7a5669b0a3da1504c5016669"} Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.995130 4775 scope.go:117] "RemoveContainer" 
containerID="3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae" Nov 25 19:40:43 crc kubenswrapper[4775]: I1125 19:40:43.995773 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-75q9h" Nov 25 19:40:44 crc kubenswrapper[4775]: I1125 19:40:44.036873 4775 scope.go:117] "RemoveContainer" containerID="3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae" Nov 25 19:40:44 crc kubenswrapper[4775]: E1125 19:40:44.040414 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae\": container with ID starting with 3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae not found: ID does not exist" containerID="3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae" Nov 25 19:40:44 crc kubenswrapper[4775]: I1125 19:40:44.040501 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae"} err="failed to get container status \"3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae\": rpc error: code = NotFound desc = could not find container \"3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae\": container with ID starting with 3871289941d035e7a7d6d2d7fdf95c00f30f0df6c706d478c36afe797c1636ae not found: ID does not exist" Nov 25 19:40:44 crc kubenswrapper[4775]: I1125 19:40:44.050744 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-75q9h"] Nov 25 19:40:44 crc kubenswrapper[4775]: I1125 19:40:44.061176 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-75q9h"] Nov 25 19:40:44 crc kubenswrapper[4775]: I1125 19:40:44.859443 4775 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" path="/var/lib/kubelet/pods/ca4b44ae-0ced-4acf-aa65-92a6fda3f98e/volumes" Nov 25 19:42:41 crc kubenswrapper[4775]: I1125 19:42:41.070136 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:42:41 crc kubenswrapper[4775]: I1125 19:42:41.070905 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:43:11 crc kubenswrapper[4775]: I1125 19:43:11.070438 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:43:11 crc kubenswrapper[4775]: I1125 19:43:11.071476 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:43:41 crc kubenswrapper[4775]: I1125 19:43:41.071016 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 
25 19:43:41 crc kubenswrapper[4775]: I1125 19:43:41.072528 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:43:41 crc kubenswrapper[4775]: I1125 19:43:41.072731 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:43:41 crc kubenswrapper[4775]: I1125 19:43:41.073194 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"814f270a52200f75169128bcfe904e73985125f44369ce9e0392e2533ead19f8"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 19:43:41 crc kubenswrapper[4775]: I1125 19:43:41.073322 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://814f270a52200f75169128bcfe904e73985125f44369ce9e0392e2533ead19f8" gracePeriod=600 Nov 25 19:43:41 crc kubenswrapper[4775]: I1125 19:43:41.212210 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="814f270a52200f75169128bcfe904e73985125f44369ce9e0392e2533ead19f8" exitCode=0 Nov 25 19:43:41 crc kubenswrapper[4775]: I1125 19:43:41.212273 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"814f270a52200f75169128bcfe904e73985125f44369ce9e0392e2533ead19f8"} Nov 25 19:43:41 crc kubenswrapper[4775]: I1125 19:43:41.212329 4775 scope.go:117] "RemoveContainer" containerID="68860d36c20c28f09e5eee4f954a6781074667da8ec5ed23c8a9114454a7a494" Nov 25 19:43:42 crc kubenswrapper[4775]: I1125 19:43:42.222489 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"f9e60c7320dcbc3b2c5ac1396fe8089095784ebc9e95a14db7f39bea21a7ea59"} Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.181111 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn"] Nov 25 19:45:00 crc kubenswrapper[4775]: E1125 19:45:00.181926 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" containerName="registry" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.181942 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" containerName="registry" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.182055 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4b44ae-0ced-4acf-aa65-92a6fda3f98e" containerName="registry" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.182490 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.184003 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.184508 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.199436 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn"] Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.284701 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e47008dd-7c5a-45e1-af24-e6726af501ea-config-volume\") pod \"collect-profiles-29401665-xhlbn\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.284790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e47008dd-7c5a-45e1-af24-e6726af501ea-secret-volume\") pod \"collect-profiles-29401665-xhlbn\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.284823 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk48c\" (UniqueName: \"kubernetes.io/projected/e47008dd-7c5a-45e1-af24-e6726af501ea-kube-api-access-xk48c\") pod \"collect-profiles-29401665-xhlbn\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.386185 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e47008dd-7c5a-45e1-af24-e6726af501ea-config-volume\") pod \"collect-profiles-29401665-xhlbn\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.386257 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e47008dd-7c5a-45e1-af24-e6726af501ea-secret-volume\") pod \"collect-profiles-29401665-xhlbn\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.386288 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk48c\" (UniqueName: \"kubernetes.io/projected/e47008dd-7c5a-45e1-af24-e6726af501ea-kube-api-access-xk48c\") pod \"collect-profiles-29401665-xhlbn\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.387421 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e47008dd-7c5a-45e1-af24-e6726af501ea-config-volume\") pod \"collect-profiles-29401665-xhlbn\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.405214 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk48c\" (UniqueName: 
\"kubernetes.io/projected/e47008dd-7c5a-45e1-af24-e6726af501ea-kube-api-access-xk48c\") pod \"collect-profiles-29401665-xhlbn\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.407282 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e47008dd-7c5a-45e1-af24-e6726af501ea-secret-volume\") pod \"collect-profiles-29401665-xhlbn\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.505800 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.712179 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn"] Nov 25 19:45:00 crc kubenswrapper[4775]: W1125 19:45:00.725623 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47008dd_7c5a_45e1_af24_e6726af501ea.slice/crio-9e60cd2b008a16119686fc374c7d8d98ca2cf0218b65ad3f149eb40152bb1278 WatchSource:0}: Error finding container 9e60cd2b008a16119686fc374c7d8d98ca2cf0218b65ad3f149eb40152bb1278: Status 404 returned error can't find the container with id 9e60cd2b008a16119686fc374c7d8d98ca2cf0218b65ad3f149eb40152bb1278 Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.746954 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" event={"ID":"e47008dd-7c5a-45e1-af24-e6726af501ea","Type":"ContainerStarted","Data":"9e60cd2b008a16119686fc374c7d8d98ca2cf0218b65ad3f149eb40152bb1278"} Nov 25 19:45:00 crc kubenswrapper[4775]: 
I1125 19:45:00.757045 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ntqwn"] Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.763131 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ntqwn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.772311 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.772445 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8x4gl" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.772694 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.787174 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rl5x8"] Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.789432 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rl5x8" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.791882 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nqr65" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.803500 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ntqwn"] Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.806771 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-vwq5q"] Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.808214 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.813434 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tm2gd" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.824612 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rl5x8"] Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.835443 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-vwq5q"] Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.899847 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z75h\" (UniqueName: \"kubernetes.io/projected/5d9e5ddd-f0c5-4134-92bd-8e6e6022ed0d-kube-api-access-2z75h\") pod \"cert-manager-cainjector-7f985d654d-ntqwn\" (UID: \"5d9e5ddd-f0c5-4134-92bd-8e6e6022ed0d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ntqwn" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.899970 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqj9\" (UniqueName: \"kubernetes.io/projected/63c18de1-e4dd-44f0-9b01-e8a3f3f6c238-kube-api-access-9rqj9\") pod \"cert-manager-webhook-5655c58dd6-vwq5q\" (UID: \"63c18de1-e4dd-44f0-9b01-e8a3f3f6c238\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" Nov 25 19:45:00 crc kubenswrapper[4775]: I1125 19:45:00.900029 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dv4l\" (UniqueName: \"kubernetes.io/projected/3ce20161-b6cf-4b36-83fe-61486d2e747f-kube-api-access-4dv4l\") pod \"cert-manager-5b446d88c5-rl5x8\" (UID: \"3ce20161-b6cf-4b36-83fe-61486d2e747f\") " pod="cert-manager/cert-manager-5b446d88c5-rl5x8" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.001740 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z75h\" (UniqueName: \"kubernetes.io/projected/5d9e5ddd-f0c5-4134-92bd-8e6e6022ed0d-kube-api-access-2z75h\") pod \"cert-manager-cainjector-7f985d654d-ntqwn\" (UID: \"5d9e5ddd-f0c5-4134-92bd-8e6e6022ed0d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ntqwn" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.001794 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rqj9\" (UniqueName: \"kubernetes.io/projected/63c18de1-e4dd-44f0-9b01-e8a3f3f6c238-kube-api-access-9rqj9\") pod \"cert-manager-webhook-5655c58dd6-vwq5q\" (UID: \"63c18de1-e4dd-44f0-9b01-e8a3f3f6c238\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.001824 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dv4l\" (UniqueName: \"kubernetes.io/projected/3ce20161-b6cf-4b36-83fe-61486d2e747f-kube-api-access-4dv4l\") pod \"cert-manager-5b446d88c5-rl5x8\" (UID: \"3ce20161-b6cf-4b36-83fe-61486d2e747f\") " pod="cert-manager/cert-manager-5b446d88c5-rl5x8" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.019743 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dv4l\" (UniqueName: \"kubernetes.io/projected/3ce20161-b6cf-4b36-83fe-61486d2e747f-kube-api-access-4dv4l\") pod \"cert-manager-5b446d88c5-rl5x8\" (UID: \"3ce20161-b6cf-4b36-83fe-61486d2e747f\") " pod="cert-manager/cert-manager-5b446d88c5-rl5x8" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.019938 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rqj9\" (UniqueName: \"kubernetes.io/projected/63c18de1-e4dd-44f0-9b01-e8a3f3f6c238-kube-api-access-9rqj9\") pod \"cert-manager-webhook-5655c58dd6-vwq5q\" (UID: \"63c18de1-e4dd-44f0-9b01-e8a3f3f6c238\") " 
pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.021121 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z75h\" (UniqueName: \"kubernetes.io/projected/5d9e5ddd-f0c5-4134-92bd-8e6e6022ed0d-kube-api-access-2z75h\") pod \"cert-manager-cainjector-7f985d654d-ntqwn\" (UID: \"5d9e5ddd-f0c5-4134-92bd-8e6e6022ed0d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ntqwn" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.084949 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ntqwn" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.134568 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rl5x8" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.143935 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.375084 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rl5x8"] Nov 25 19:45:01 crc kubenswrapper[4775]: W1125 19:45:01.384671 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ce20161_b6cf_4b36_83fe_61486d2e747f.slice/crio-c2f98bc3bfc7a39d9b3ab0b013155bf94e93364dccc80874a9f543da20473a4a WatchSource:0}: Error finding container c2f98bc3bfc7a39d9b3ab0b013155bf94e93364dccc80874a9f543da20473a4a: Status 404 returned error can't find the container with id c2f98bc3bfc7a39d9b3ab0b013155bf94e93364dccc80874a9f543da20473a4a Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.389548 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.476860 
4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ntqwn"] Nov 25 19:45:01 crc kubenswrapper[4775]: W1125 19:45:01.480490 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d9e5ddd_f0c5_4134_92bd_8e6e6022ed0d.slice/crio-d6c9ee4a54102578187d8dff2422f8f30878769cd3a14a8aadd97622ef80661c WatchSource:0}: Error finding container d6c9ee4a54102578187d8dff2422f8f30878769cd3a14a8aadd97622ef80661c: Status 404 returned error can't find the container with id d6c9ee4a54102578187d8dff2422f8f30878769cd3a14a8aadd97622ef80661c Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.613236 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-vwq5q"] Nov 25 19:45:01 crc kubenswrapper[4775]: W1125 19:45:01.616854 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63c18de1_e4dd_44f0_9b01_e8a3f3f6c238.slice/crio-bfa48df68440687ed5c86ff1fae3add2d8e24bce512fabc4966cf86a31d3151d WatchSource:0}: Error finding container bfa48df68440687ed5c86ff1fae3add2d8e24bce512fabc4966cf86a31d3151d: Status 404 returned error can't find the container with id bfa48df68440687ed5c86ff1fae3add2d8e24bce512fabc4966cf86a31d3151d Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.754053 4775 generic.go:334] "Generic (PLEG): container finished" podID="e47008dd-7c5a-45e1-af24-e6726af501ea" containerID="dea22e944e91e4ef838ab530ff6a1bac1550ef63f34c2d1a2d3a81b8ff9fc7c8" exitCode=0 Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.754120 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" event={"ID":"e47008dd-7c5a-45e1-af24-e6726af501ea","Type":"ContainerDied","Data":"dea22e944e91e4ef838ab530ff6a1bac1550ef63f34c2d1a2d3a81b8ff9fc7c8"} Nov 25 19:45:01 crc 
kubenswrapper[4775]: I1125 19:45:01.756151 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" event={"ID":"63c18de1-e4dd-44f0-9b01-e8a3f3f6c238","Type":"ContainerStarted","Data":"bfa48df68440687ed5c86ff1fae3add2d8e24bce512fabc4966cf86a31d3151d"} Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.757680 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ntqwn" event={"ID":"5d9e5ddd-f0c5-4134-92bd-8e6e6022ed0d","Type":"ContainerStarted","Data":"d6c9ee4a54102578187d8dff2422f8f30878769cd3a14a8aadd97622ef80661c"} Nov 25 19:45:01 crc kubenswrapper[4775]: I1125 19:45:01.759117 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rl5x8" event={"ID":"3ce20161-b6cf-4b36-83fe-61486d2e747f","Type":"ContainerStarted","Data":"c2f98bc3bfc7a39d9b3ab0b013155bf94e93364dccc80874a9f543da20473a4a"} Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.087216 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.231101 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk48c\" (UniqueName: \"kubernetes.io/projected/e47008dd-7c5a-45e1-af24-e6726af501ea-kube-api-access-xk48c\") pod \"e47008dd-7c5a-45e1-af24-e6726af501ea\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.231168 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e47008dd-7c5a-45e1-af24-e6726af501ea-secret-volume\") pod \"e47008dd-7c5a-45e1-af24-e6726af501ea\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.231279 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e47008dd-7c5a-45e1-af24-e6726af501ea-config-volume\") pod \"e47008dd-7c5a-45e1-af24-e6726af501ea\" (UID: \"e47008dd-7c5a-45e1-af24-e6726af501ea\") " Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.232581 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e47008dd-7c5a-45e1-af24-e6726af501ea-config-volume" (OuterVolumeSpecName: "config-volume") pod "e47008dd-7c5a-45e1-af24-e6726af501ea" (UID: "e47008dd-7c5a-45e1-af24-e6726af501ea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.236971 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47008dd-7c5a-45e1-af24-e6726af501ea-kube-api-access-xk48c" (OuterVolumeSpecName: "kube-api-access-xk48c") pod "e47008dd-7c5a-45e1-af24-e6726af501ea" (UID: "e47008dd-7c5a-45e1-af24-e6726af501ea"). 
InnerVolumeSpecName "kube-api-access-xk48c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.237001 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47008dd-7c5a-45e1-af24-e6726af501ea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e47008dd-7c5a-45e1-af24-e6726af501ea" (UID: "e47008dd-7c5a-45e1-af24-e6726af501ea"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.333059 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e47008dd-7c5a-45e1-af24-e6726af501ea-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.333537 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk48c\" (UniqueName: \"kubernetes.io/projected/e47008dd-7c5a-45e1-af24-e6726af501ea-kube-api-access-xk48c\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.333573 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e47008dd-7c5a-45e1-af24-e6726af501ea-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.777848 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" event={"ID":"e47008dd-7c5a-45e1-af24-e6726af501ea","Type":"ContainerDied","Data":"9e60cd2b008a16119686fc374c7d8d98ca2cf0218b65ad3f149eb40152bb1278"} Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.777922 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e60cd2b008a16119686fc374c7d8d98ca2cf0218b65ad3f149eb40152bb1278" Nov 25 19:45:03 crc kubenswrapper[4775]: I1125 19:45:03.778016 4775 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn" Nov 25 19:45:05 crc kubenswrapper[4775]: I1125 19:45:05.795451 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ntqwn" event={"ID":"5d9e5ddd-f0c5-4134-92bd-8e6e6022ed0d","Type":"ContainerStarted","Data":"8ccb3f9ae5595811c99aa739a8e2eea930c3fbd9092f73cf27a622598a490cdd"} Nov 25 19:45:05 crc kubenswrapper[4775]: I1125 19:45:05.796621 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rl5x8" event={"ID":"3ce20161-b6cf-4b36-83fe-61486d2e747f","Type":"ContainerStarted","Data":"1c56e1c3a76826f98fe140a5de4cd057a3b9e84367a36f50d8f69664d17795b9"} Nov 25 19:45:05 crc kubenswrapper[4775]: I1125 19:45:05.810958 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-ntqwn" podStartSLOduration=1.7900798359999999 podStartE2EDuration="5.810942598s" podCreationTimestamp="2025-11-25 19:45:00 +0000 UTC" firstStartedPulling="2025-11-25 19:45:01.482842107 +0000 UTC m=+683.399204473" lastFinishedPulling="2025-11-25 19:45:05.503704869 +0000 UTC m=+687.420067235" observedRunningTime="2025-11-25 19:45:05.807597767 +0000 UTC m=+687.723960134" watchObservedRunningTime="2025-11-25 19:45:05.810942598 +0000 UTC m=+687.727304964" Nov 25 19:45:05 crc kubenswrapper[4775]: I1125 19:45:05.823913 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-rl5x8" podStartSLOduration=1.754150463 podStartE2EDuration="5.823900539s" podCreationTimestamp="2025-11-25 19:45:00 +0000 UTC" firstStartedPulling="2025-11-25 19:45:01.389068175 +0000 UTC m=+683.305430541" lastFinishedPulling="2025-11-25 19:45:05.458818251 +0000 UTC m=+687.375180617" observedRunningTime="2025-11-25 19:45:05.821736801 +0000 UTC m=+687.738099177" 
watchObservedRunningTime="2025-11-25 19:45:05.823900539 +0000 UTC m=+687.740262905" Nov 25 19:45:06 crc kubenswrapper[4775]: I1125 19:45:06.806282 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" event={"ID":"63c18de1-e4dd-44f0-9b01-e8a3f3f6c238","Type":"ContainerStarted","Data":"4a5998ec1f79f21b0c4781719db6c26a4bb2ebf127e0ac163cef191831066c09"} Nov 25 19:45:06 crc kubenswrapper[4775]: I1125 19:45:06.838638 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" podStartSLOduration=2.012029384 podStartE2EDuration="6.838615109s" podCreationTimestamp="2025-11-25 19:45:00 +0000 UTC" firstStartedPulling="2025-11-25 19:45:01.620087418 +0000 UTC m=+683.536449784" lastFinishedPulling="2025-11-25 19:45:06.446673133 +0000 UTC m=+688.363035509" observedRunningTime="2025-11-25 19:45:06.837074168 +0000 UTC m=+688.753436564" watchObservedRunningTime="2025-11-25 19:45:06.838615109 +0000 UTC m=+688.754977505" Nov 25 19:45:07 crc kubenswrapper[4775]: I1125 19:45:07.813530 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" Nov 25 19:45:11 crc kubenswrapper[4775]: I1125 19:45:11.149039 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-vwq5q" Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.383099 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x28tq"] Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.384293 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovn-controller" containerID="cri-o://b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536" gracePeriod=30 Nov 25 19:45:30 crc 
kubenswrapper[4775]: I1125 19:45:30.390908 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="northd" containerID="cri-o://ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e" gracePeriod=30 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.391061 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="nbdb" containerID="cri-o://9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1" gracePeriod=30 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.391076 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovn-acl-logging" containerID="cri-o://30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b" gracePeriod=30 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.391049 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="sbdb" containerID="cri-o://52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8" gracePeriod=30 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.391113 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505" gracePeriod=30 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.391253 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" 
podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kube-rbac-proxy-node" containerID="cri-o://d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465" gracePeriod=30 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.437100 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" containerID="cri-o://ad9a521dc5be99aef9e604fc3390074741172291c184e8a62c3b539a30d8964e" gracePeriod=30 Nov 25 19:45:30 crc kubenswrapper[4775]: E1125 19:45:30.691603 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 25 19:45:30 crc kubenswrapper[4775]: E1125 19:45:30.693054 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 25 19:45:30 crc kubenswrapper[4775]: E1125 19:45:30.693635 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 25 19:45:30 crc kubenswrapper[4775]: E1125 19:45:30.695135 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 25 19:45:30 crc kubenswrapper[4775]: E1125 19:45:30.695222 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 25 19:45:30 crc kubenswrapper[4775]: E1125 19:45:30.695254 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="nbdb" Nov 25 19:45:30 crc kubenswrapper[4775]: E1125 19:45:30.696468 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 25 19:45:30 crc kubenswrapper[4775]: E1125 19:45:30.696504 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="sbdb" Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.971197 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/2.log" Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.972056 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/1.log" Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.972143 4775 generic.go:334] "Generic (PLEG): container finished" podID="850f083c-ad86-47bb-8fd1-4f2a4a9e7831" containerID="52ab40ee9cac20b78eb96a25e72b9de04e6dfb11304ceee78c10e6b026448e61" exitCode=2 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.972251 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qf2w" event={"ID":"850f083c-ad86-47bb-8fd1-4f2a4a9e7831","Type":"ContainerDied","Data":"52ab40ee9cac20b78eb96a25e72b9de04e6dfb11304ceee78c10e6b026448e61"} Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.972328 4775 scope.go:117] "RemoveContainer" containerID="0214a60a160bcf831db4a80d10761356a50ea831420fe32966eb42ba3de54426" Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.973060 4775 scope.go:117] "RemoveContainer" containerID="52ab40ee9cac20b78eb96a25e72b9de04e6dfb11304ceee78c10e6b026448e61" Nov 25 19:45:30 crc kubenswrapper[4775]: E1125 19:45:30.973408 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 
20s restarting failed container=kube-multus pod=multus-8qf2w_openshift-multus(850f083c-ad86-47bb-8fd1-4f2a4a9e7831)\"" pod="openshift-multus/multus-8qf2w" podUID="850f083c-ad86-47bb-8fd1-4f2a4a9e7831" Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.978366 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/3.log" Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.983586 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovn-acl-logging/0.log" Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.984421 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovn-controller/0.log" Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985083 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="ad9a521dc5be99aef9e604fc3390074741172291c184e8a62c3b539a30d8964e" exitCode=0 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985123 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8" exitCode=0 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985121 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"ad9a521dc5be99aef9e604fc3390074741172291c184e8a62c3b539a30d8964e"} Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985188 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" 
event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8"} Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985220 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1"} Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985141 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1" exitCode=0 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985255 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e" exitCode=0 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985269 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505" exitCode=0 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985281 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465" exitCode=0 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985293 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b" exitCode=143 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985306 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerID="b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536" 
exitCode=143 Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985336 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e"} Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985375 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505"} Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985396 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465"} Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985414 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b"} Nov 25 19:45:30 crc kubenswrapper[4775]: I1125 19:45:30.985432 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536"} Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.179405 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovnkube-controller/3.log" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.182998 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovn-acl-logging/0.log" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.183599 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovn-controller/0.log" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.184338 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.192228 4775 scope.go:117] "RemoveContainer" containerID="a54bd1922385c4b790d7c313314ccdd8ed15665b2d8a9529e8b307ca71509cb5" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259024 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nc9vg"] Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259220 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kubecfg-setup" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259231 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kubecfg-setup" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259239 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259246 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259253 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259259 4775 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259289 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259295 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259304 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovn-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259309 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovn-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259317 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259323 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259335 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="sbdb" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259366 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="sbdb" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259379 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovn-acl-logging" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259385 4775 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovn-acl-logging" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259397 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kube-rbac-proxy-node" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259402 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kube-rbac-proxy-node" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259410 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="northd" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259416 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="northd" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259444 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="nbdb" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259450 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="nbdb" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.259458 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47008dd-7c5a-45e1-af24-e6726af501ea" containerName="collect-profiles" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259467 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47008dd-7c5a-45e1-af24-e6726af501ea" containerName="collect-profiles" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259626 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovn-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259639 4775 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259700 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovn-acl-logging" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259711 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="nbdb" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259719 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259728 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259739 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47008dd-7c5a-45e1-af24-e6726af501ea" containerName="collect-profiles" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259747 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259784 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="sbdb" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259797 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="kube-rbac-proxy-node" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.259806 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="northd" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.260001 4775 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.260037 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: E1125 19:45:31.260047 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.260054 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.260205 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.260247 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" containerName="ovnkube-controller" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.262358 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278158 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-log-socket\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278194 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-netns\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278220 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-ovn\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278238 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-systemd-units\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278346 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-config\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278368 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-slash\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278397 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-ovn-kubernetes\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278416 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-netd\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278449 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovn-node-metrics-cert\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278465 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-systemd\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278489 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-openvswitch\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc 
kubenswrapper[4775]: I1125 19:45:31.278510 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-env-overrides\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278534 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-var-lib-openvswitch\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278555 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-kubelet\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278580 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-bin\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278602 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278604 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-slash" (OuterVolumeSpecName: "host-slash") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278633 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278640 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-node-log\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278691 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7q6f\" (UniqueName: \"kubernetes.io/projected/1b02c35a-be66-4cf6-afc0-12ddc2f74148-kube-api-access-h7q6f\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278716 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-script-lib\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278760 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-etc-openvswitch\") pod \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\" (UID: \"1b02c35a-be66-4cf6-afc0-12ddc2f74148\") " Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278989 4775 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279012 4775 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278659 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-log-socket" (OuterVolumeSpecName: "log-socket") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279099 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-node-log" (OuterVolumeSpecName: "node-log") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278613 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278734 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278959 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.278997 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279022 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279038 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279043 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279052 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279059 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279061 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279081 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279481 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.279697 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.286133 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.287391 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b02c35a-be66-4cf6-afc0-12ddc2f74148-kube-api-access-h7q6f" (OuterVolumeSpecName: "kube-api-access-h7q6f") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "kube-api-access-h7q6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.309789 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1b02c35a-be66-4cf6-afc0-12ddc2f74148" (UID: "1b02c35a-be66-4cf6-afc0-12ddc2f74148"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380037 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a13ce00-21c5-41d3-aaec-f98667df7819-ovn-node-metrics-cert\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380085 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-run-openvswitch\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380104 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-run-netns\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380126 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-run-ovn-kubernetes\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380142 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380164 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-run-ovn\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380235 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-log-socket\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380287 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w697x\" (UniqueName: \"kubernetes.io/projected/1a13ce00-21c5-41d3-aaec-f98667df7819-kube-api-access-w697x\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380324 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a13ce00-21c5-41d3-aaec-f98667df7819-env-overrides\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380360 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-node-log\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380377 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-systemd-units\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380391 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-cni-netd\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380406 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a13ce00-21c5-41d3-aaec-f98667df7819-ovnkube-config\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380422 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-etc-openvswitch\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380439 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-cni-bin\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380455 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-run-systemd\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380498 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-var-lib-openvswitch\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380536 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-slash\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380588 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-kubelet\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380615 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a13ce00-21c5-41d3-aaec-f98667df7819-ovnkube-script-lib\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380736 4775 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380757 4775 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380777 4775 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380795 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380812 4775 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380828 4775 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380845 4775 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380861 4775 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380876 4775 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380893 4775 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380910 4775 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380925 4775 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380942 4775 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380958 4775 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380974 4775 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.380992 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7q6f\" (UniqueName: \"kubernetes.io/projected/1b02c35a-be66-4cf6-afc0-12ddc2f74148-kube-api-access-h7q6f\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.381008 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1b02c35a-be66-4cf6-afc0-12ddc2f74148-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.381024 4775 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b02c35a-be66-4cf6-afc0-12ddc2f74148-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482163 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-node-log\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482233 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-systemd-units\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc 
kubenswrapper[4775]: I1125 19:45:31.482257 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-node-log\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482267 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-cni-netd\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482296 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-systemd-units\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482302 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a13ce00-21c5-41d3-aaec-f98667df7819-ovnkube-config\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482319 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-cni-netd\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482334 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-etc-openvswitch\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482364 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-cni-bin\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482392 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-run-systemd\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482431 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-var-lib-openvswitch\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482469 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-slash\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482490 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-cni-bin\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482511 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a13ce00-21c5-41d3-aaec-f98667df7819-ovnkube-script-lib\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482545 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-kubelet\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-etc-openvswitch\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482602 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-slash\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482612 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a13ce00-21c5-41d3-aaec-f98667df7819-ovn-node-metrics-cert\") pod 
\"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482643 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-run-systemd\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482643 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-run-openvswitch\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482708 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-var-lib-openvswitch\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482721 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-run-netns\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482748 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-kubelet\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482767 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-run-ovn-kubernetes\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482798 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482842 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-run-ovn\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482870 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-log-socket\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482895 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a13ce00-21c5-41d3-aaec-f98667df7819-ovnkube-config\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482900 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w697x\" (UniqueName: \"kubernetes.io/projected/1a13ce00-21c5-41d3-aaec-f98667df7819-kube-api-access-w697x\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482929 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-run-ovn-kubernetes\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.482930 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a13ce00-21c5-41d3-aaec-f98667df7819-env-overrides\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.483059 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-run-openvswitch\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.483093 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.483117 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-host-run-netns\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.483180 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-run-ovn\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.483210 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a13ce00-21c5-41d3-aaec-f98667df7819-log-socket\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.483630 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a13ce00-21c5-41d3-aaec-f98667df7819-ovnkube-script-lib\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.483835 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a13ce00-21c5-41d3-aaec-f98667df7819-env-overrides\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.487726 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a13ce00-21c5-41d3-aaec-f98667df7819-ovn-node-metrics-cert\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.510227 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w697x\" (UniqueName: \"kubernetes.io/projected/1a13ce00-21c5-41d3-aaec-f98667df7819-kube-api-access-w697x\") pod \"ovnkube-node-nc9vg\" (UID: \"1a13ce00-21c5-41d3-aaec-f98667df7819\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.580276 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:31 crc kubenswrapper[4775]: I1125 19:45:31.999153 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovn-acl-logging/0.log" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.000985 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x28tq_1b02c35a-be66-4cf6-afc0-12ddc2f74148/ovn-controller/0.log" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.001609 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" event={"ID":"1b02c35a-be66-4cf6-afc0-12ddc2f74148","Type":"ContainerDied","Data":"efd7930e4c966536d20eee32743a0eb112a93a76fc1aab48a2c03e46779aff1b"} Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.001693 4775 scope.go:117] "RemoveContainer" containerID="ad9a521dc5be99aef9e604fc3390074741172291c184e8a62c3b539a30d8964e" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.001704 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x28tq" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.004560 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/2.log" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.007434 4775 generic.go:334] "Generic (PLEG): container finished" podID="1a13ce00-21c5-41d3-aaec-f98667df7819" containerID="d422362a5bf705ef5d17d20cbbc6d911c287c33ed6d7060b2bc95feed5bac50f" exitCode=0 Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.007513 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerDied","Data":"d422362a5bf705ef5d17d20cbbc6d911c287c33ed6d7060b2bc95feed5bac50f"} Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.007561 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerStarted","Data":"c1826f9a432aafae517bbf0b5ec1434a62ab8171d77e2f9f3a2ac82c1b848415"} Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.026374 4775 scope.go:117] "RemoveContainer" containerID="52728fd405f82504add3a27c4fa7a46c4fafd7c6940fb388369046d67ba7a2d8" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.052247 4775 scope.go:117] "RemoveContainer" containerID="9a6b570631291c6cade65ca84f84f2283341a8ae126da31da78058ac76be08d1" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.074055 4775 scope.go:117] "RemoveContainer" containerID="ae0b9378e0b2b234784469a226b1f0473fa828227172389d2060467df3c71e8e" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.116934 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x28tq"] Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.117433 4775 scope.go:117] "RemoveContainer" 
containerID="e05de2fa472921cfce5ec1a6f1d47a92e437a46411156bdeea1a4500ddb8e505" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.125932 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x28tq"] Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.132969 4775 scope.go:117] "RemoveContainer" containerID="d5fd12406b817ab2c83f360b2938e7bce8b90802285e74b64861b9b83fc31465" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.149018 4775 scope.go:117] "RemoveContainer" containerID="30ee89f0aa588342c057810d30b67508d3b1d4fea934f452c92f14695516d97b" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.163144 4775 scope.go:117] "RemoveContainer" containerID="b0eb75b59d578b7af3193a82d45f65c8eb75bfde2e72f1acff00508f9614f536" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.176736 4775 scope.go:117] "RemoveContainer" containerID="114822bc69c221939960d9abc0fc847987e26ac73a39d125ca57d4d0589a2356" Nov 25 19:45:32 crc kubenswrapper[4775]: I1125 19:45:32.859198 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b02c35a-be66-4cf6-afc0-12ddc2f74148" path="/var/lib/kubelet/pods/1b02c35a-be66-4cf6-afc0-12ddc2f74148/volumes" Nov 25 19:45:33 crc kubenswrapper[4775]: I1125 19:45:33.021485 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerStarted","Data":"05ce0722a9016b55fdd95752048470e4f8b0b8c991939d463cb2e9250d200248"} Nov 25 19:45:33 crc kubenswrapper[4775]: I1125 19:45:33.021535 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerStarted","Data":"14d51f33cffd63b64ac6403e0ba87d1afd144555b7d443099a44327ddf74676e"} Nov 25 19:45:33 crc kubenswrapper[4775]: I1125 19:45:33.021553 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerStarted","Data":"fa1e027dc2d817d17c605cf4de59c54c89dd564ee59cdf41e98e0d7cfc8710b5"} Nov 25 19:45:33 crc kubenswrapper[4775]: I1125 19:45:33.021568 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerStarted","Data":"6885fd30529139d227aa794aa236072111eb9c767b1268565fd681467ecf9348"} Nov 25 19:45:34 crc kubenswrapper[4775]: I1125 19:45:34.035005 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerStarted","Data":"93c7324ba351441dc7a53f8d31cd6f765106bfba6701a7a4ab8334da160837ed"} Nov 25 19:45:34 crc kubenswrapper[4775]: I1125 19:45:34.035337 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerStarted","Data":"f2d4998cfd30cd258f474b4f94ac3f371612a4bc14fc338f3a0efd6f487ffe43"} Nov 25 19:45:36 crc kubenswrapper[4775]: I1125 19:45:36.053415 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerStarted","Data":"fcf42d1fed85ed13a224b0398c971ab6e856cfb04d1ea2e0a91c9cb58a248b30"} Nov 25 19:45:38 crc kubenswrapper[4775]: I1125 19:45:38.071638 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" event={"ID":"1a13ce00-21c5-41d3-aaec-f98667df7819","Type":"ContainerStarted","Data":"65331c7b6b70824ea94f25a4b73e6c9becbf4d665337c05e58326678c6b0bfff"} Nov 25 19:45:38 crc kubenswrapper[4775]: I1125 19:45:38.072478 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:38 
crc kubenswrapper[4775]: I1125 19:45:38.072496 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:38 crc kubenswrapper[4775]: I1125 19:45:38.108609 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:38 crc kubenswrapper[4775]: I1125 19:45:38.189008 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" podStartSLOduration=7.188993585 podStartE2EDuration="7.188993585s" podCreationTimestamp="2025-11-25 19:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:45:38.132702294 +0000 UTC m=+720.049064670" watchObservedRunningTime="2025-11-25 19:45:38.188993585 +0000 UTC m=+720.105355951" Nov 25 19:45:39 crc kubenswrapper[4775]: I1125 19:45:39.079626 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:39 crc kubenswrapper[4775]: I1125 19:45:39.127283 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:45:41 crc kubenswrapper[4775]: I1125 19:45:41.070801 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:45:41 crc kubenswrapper[4775]: I1125 19:45:41.070875 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 25 19:45:45 crc kubenswrapper[4775]: I1125 19:45:45.848127 4775 scope.go:117] "RemoveContainer" containerID="52ab40ee9cac20b78eb96a25e72b9de04e6dfb11304ceee78c10e6b026448e61" Nov 25 19:45:45 crc kubenswrapper[4775]: E1125 19:45:45.850726 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8qf2w_openshift-multus(850f083c-ad86-47bb-8fd1-4f2a4a9e7831)\"" pod="openshift-multus/multus-8qf2w" podUID="850f083c-ad86-47bb-8fd1-4f2a4a9e7831" Nov 25 19:45:52 crc kubenswrapper[4775]: I1125 19:45:52.944722 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2"] Nov 25 19:45:52 crc kubenswrapper[4775]: I1125 19:45:52.951327 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:52 crc kubenswrapper[4775]: I1125 19:45:52.960270 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 19:45:52 crc kubenswrapper[4775]: I1125 19:45:52.967097 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2"] Nov 25 19:45:52 crc kubenswrapper[4775]: I1125 19:45:52.993566 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:52 crc kubenswrapper[4775]: I1125 19:45:52.993786 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj94b\" (UniqueName: \"kubernetes.io/projected/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-kube-api-access-xj94b\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:52 crc kubenswrapper[4775]: I1125 19:45:52.993980 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: I1125 19:45:53.095812 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: I1125 19:45:53.096009 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: I1125 19:45:53.096056 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj94b\" (UniqueName: 
\"kubernetes.io/projected/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-kube-api-access-xj94b\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: I1125 19:45:53.096623 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: I1125 19:45:53.096806 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: I1125 19:45:53.128857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj94b\" (UniqueName: \"kubernetes.io/projected/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-kube-api-access-xj94b\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: I1125 19:45:53.289876 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: E1125 19:45:53.320800 4775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace_a8f57ee2-05a5-41e8-8d84-9f0b746ef461_0(347d0522b5c7f50f53bda16412874b35660f501aa83047291e5cfe5566b041a0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 19:45:53 crc kubenswrapper[4775]: E1125 19:45:53.320903 4775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace_a8f57ee2-05a5-41e8-8d84-9f0b746ef461_0(347d0522b5c7f50f53bda16412874b35660f501aa83047291e5cfe5566b041a0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: E1125 19:45:53.320941 4775 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace_a8f57ee2-05a5-41e8-8d84-9f0b746ef461_0(347d0522b5c7f50f53bda16412874b35660f501aa83047291e5cfe5566b041a0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:53 crc kubenswrapper[4775]: E1125 19:45:53.321006 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace(a8f57ee2-05a5-41e8-8d84-9f0b746ef461)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace(a8f57ee2-05a5-41e8-8d84-9f0b746ef461)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace_a8f57ee2-05a5-41e8-8d84-9f0b746ef461_0(347d0522b5c7f50f53bda16412874b35660f501aa83047291e5cfe5566b041a0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" podUID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" Nov 25 19:45:54 crc kubenswrapper[4775]: I1125 19:45:54.194995 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:54 crc kubenswrapper[4775]: I1125 19:45:54.195545 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:54 crc kubenswrapper[4775]: E1125 19:45:54.225885 4775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace_a8f57ee2-05a5-41e8-8d84-9f0b746ef461_0(0f081448d877de3d587a155655f79b7ae0844208c80fd92868c7b86bac3d245e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 25 19:45:54 crc kubenswrapper[4775]: E1125 19:45:54.225954 4775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace_a8f57ee2-05a5-41e8-8d84-9f0b746ef461_0(0f081448d877de3d587a155655f79b7ae0844208c80fd92868c7b86bac3d245e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:54 crc kubenswrapper[4775]: E1125 19:45:54.226013 4775 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace_a8f57ee2-05a5-41e8-8d84-9f0b746ef461_0(0f081448d877de3d587a155655f79b7ae0844208c80fd92868c7b86bac3d245e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:45:54 crc kubenswrapper[4775]: E1125 19:45:54.226062 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace(a8f57ee2-05a5-41e8-8d84-9f0b746ef461)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace(a8f57ee2-05a5-41e8-8d84-9f0b746ef461)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_openshift-marketplace_a8f57ee2-05a5-41e8-8d84-9f0b746ef461_0(0f081448d877de3d587a155655f79b7ae0844208c80fd92868c7b86bac3d245e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" podUID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" Nov 25 19:46:00 crc kubenswrapper[4775]: I1125 19:46:00.847055 4775 scope.go:117] "RemoveContainer" containerID="52ab40ee9cac20b78eb96a25e72b9de04e6dfb11304ceee78c10e6b026448e61" Nov 25 19:46:01 crc kubenswrapper[4775]: I1125 19:46:01.250155 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qf2w_850f083c-ad86-47bb-8fd1-4f2a4a9e7831/kube-multus/2.log" Nov 25 19:46:01 crc kubenswrapper[4775]: I1125 19:46:01.250297 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qf2w" event={"ID":"850f083c-ad86-47bb-8fd1-4f2a4a9e7831","Type":"ContainerStarted","Data":"30e8af3fe5d2380b665a087edec31f3648deec9b15c4a0a550d7ca80d7f8c051"} Nov 25 19:46:01 crc kubenswrapper[4775]: I1125 19:46:01.620766 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nc9vg" Nov 25 19:46:06 crc kubenswrapper[4775]: 
I1125 19:46:06.846514 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:46:06 crc kubenswrapper[4775]: I1125 19:46:06.847110 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:46:07 crc kubenswrapper[4775]: I1125 19:46:07.067972 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2"] Nov 25 19:46:07 crc kubenswrapper[4775]: I1125 19:46:07.296268 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" event={"ID":"a8f57ee2-05a5-41e8-8d84-9f0b746ef461","Type":"ContainerStarted","Data":"5d402398e4969c1dec32f54ffef6e8b4f7bc0b0c0493fe135d5eb1b70fac2b59"} Nov 25 19:46:07 crc kubenswrapper[4775]: I1125 19:46:07.296389 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" event={"ID":"a8f57ee2-05a5-41e8-8d84-9f0b746ef461","Type":"ContainerStarted","Data":"222b5fb7c78ace77502778e75e7eb96acef8394d54f2193fa5b81b1ededce732"} Nov 25 19:46:08 crc kubenswrapper[4775]: I1125 19:46:08.304614 4775 generic.go:334] "Generic (PLEG): container finished" podID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerID="5d402398e4969c1dec32f54ffef6e8b4f7bc0b0c0493fe135d5eb1b70fac2b59" exitCode=0 Nov 25 19:46:08 crc kubenswrapper[4775]: I1125 19:46:08.304716 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" event={"ID":"a8f57ee2-05a5-41e8-8d84-9f0b746ef461","Type":"ContainerDied","Data":"5d402398e4969c1dec32f54ffef6e8b4f7bc0b0c0493fe135d5eb1b70fac2b59"} Nov 25 19:46:10 crc 
kubenswrapper[4775]: I1125 19:46:10.320838 4775 generic.go:334] "Generic (PLEG): container finished" podID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerID="778af672a606b796adbf504f78a2ed68269ab792cdd255ff7e04fad22c3b683c" exitCode=0 Nov 25 19:46:10 crc kubenswrapper[4775]: I1125 19:46:10.320962 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" event={"ID":"a8f57ee2-05a5-41e8-8d84-9f0b746ef461","Type":"ContainerDied","Data":"778af672a606b796adbf504f78a2ed68269ab792cdd255ff7e04fad22c3b683c"} Nov 25 19:46:11 crc kubenswrapper[4775]: I1125 19:46:11.070554 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:46:11 crc kubenswrapper[4775]: I1125 19:46:11.070989 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:46:11 crc kubenswrapper[4775]: I1125 19:46:11.336688 4775 generic.go:334] "Generic (PLEG): container finished" podID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerID="6b93b8c67517e6d11c18292917ae7002d4382c8bb94e2bc6fe2e441fc6b41acf" exitCode=0 Nov 25 19:46:11 crc kubenswrapper[4775]: I1125 19:46:11.336816 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" event={"ID":"a8f57ee2-05a5-41e8-8d84-9f0b746ef461","Type":"ContainerDied","Data":"6b93b8c67517e6d11c18292917ae7002d4382c8bb94e2bc6fe2e441fc6b41acf"} Nov 25 19:46:12 crc kubenswrapper[4775]: 
I1125 19:46:12.634207 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:46:12 crc kubenswrapper[4775]: I1125 19:46:12.815941 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-util\") pod \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " Nov 25 19:46:12 crc kubenswrapper[4775]: I1125 19:46:12.816020 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-bundle\") pod \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " Nov 25 19:46:12 crc kubenswrapper[4775]: I1125 19:46:12.816740 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-bundle" (OuterVolumeSpecName: "bundle") pod "a8f57ee2-05a5-41e8-8d84-9f0b746ef461" (UID: "a8f57ee2-05a5-41e8-8d84-9f0b746ef461"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:46:12 crc kubenswrapper[4775]: I1125 19:46:12.816831 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj94b\" (UniqueName: \"kubernetes.io/projected/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-kube-api-access-xj94b\") pod \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\" (UID: \"a8f57ee2-05a5-41e8-8d84-9f0b746ef461\") " Nov 25 19:46:12 crc kubenswrapper[4775]: I1125 19:46:12.817268 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:46:12 crc kubenswrapper[4775]: I1125 19:46:12.826072 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-kube-api-access-xj94b" (OuterVolumeSpecName: "kube-api-access-xj94b") pod "a8f57ee2-05a5-41e8-8d84-9f0b746ef461" (UID: "a8f57ee2-05a5-41e8-8d84-9f0b746ef461"). InnerVolumeSpecName "kube-api-access-xj94b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:46:12 crc kubenswrapper[4775]: I1125 19:46:12.918334 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj94b\" (UniqueName: \"kubernetes.io/projected/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-kube-api-access-xj94b\") on node \"crc\" DevicePath \"\"" Nov 25 19:46:13 crc kubenswrapper[4775]: I1125 19:46:13.019566 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-util" (OuterVolumeSpecName: "util") pod "a8f57ee2-05a5-41e8-8d84-9f0b746ef461" (UID: "a8f57ee2-05a5-41e8-8d84-9f0b746ef461"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:46:13 crc kubenswrapper[4775]: I1125 19:46:13.120693 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8f57ee2-05a5-41e8-8d84-9f0b746ef461-util\") on node \"crc\" DevicePath \"\"" Nov 25 19:46:13 crc kubenswrapper[4775]: I1125 19:46:13.354064 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" event={"ID":"a8f57ee2-05a5-41e8-8d84-9f0b746ef461","Type":"ContainerDied","Data":"222b5fb7c78ace77502778e75e7eb96acef8394d54f2193fa5b81b1ededce732"} Nov 25 19:46:13 crc kubenswrapper[4775]: I1125 19:46:13.354127 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="222b5fb7c78ace77502778e75e7eb96acef8394d54f2193fa5b81b1ededce732" Nov 25 19:46:13 crc kubenswrapper[4775]: I1125 19:46:13.354231 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2" Nov 25 19:46:14 crc kubenswrapper[4775]: I1125 19:46:14.131874 4775 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.663500 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-d6clh"] Nov 25 19:46:19 crc kubenswrapper[4775]: E1125 19:46:19.664283 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerName="util" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.664299 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerName="util" Nov 25 19:46:19 crc kubenswrapper[4775]: E1125 19:46:19.664312 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerName="pull" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.664319 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerName="pull" Nov 25 19:46:19 crc kubenswrapper[4775]: E1125 19:46:19.664338 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerName="extract" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.664346 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerName="extract" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.664470 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f57ee2-05a5-41e8-8d84-9f0b746ef461" containerName="extract" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.664979 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-d6clh" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.667847 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.668051 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.667902 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-s8v4f" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.682073 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-d6clh"] Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.808736 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zxhv\" (UniqueName: 
\"kubernetes.io/projected/ba33ec42-80cf-4999-9cb7-19b3aed25d86-kube-api-access-9zxhv\") pod \"nmstate-operator-557fdffb88-d6clh\" (UID: \"ba33ec42-80cf-4999-9cb7-19b3aed25d86\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-d6clh" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.910150 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zxhv\" (UniqueName: \"kubernetes.io/projected/ba33ec42-80cf-4999-9cb7-19b3aed25d86-kube-api-access-9zxhv\") pod \"nmstate-operator-557fdffb88-d6clh\" (UID: \"ba33ec42-80cf-4999-9cb7-19b3aed25d86\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-d6clh" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.939579 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zxhv\" (UniqueName: \"kubernetes.io/projected/ba33ec42-80cf-4999-9cb7-19b3aed25d86-kube-api-access-9zxhv\") pod \"nmstate-operator-557fdffb88-d6clh\" (UID: \"ba33ec42-80cf-4999-9cb7-19b3aed25d86\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-d6clh" Nov 25 19:46:19 crc kubenswrapper[4775]: I1125 19:46:19.986638 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-d6clh" Nov 25 19:46:20 crc kubenswrapper[4775]: I1125 19:46:20.417471 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-d6clh"] Nov 25 19:46:21 crc kubenswrapper[4775]: I1125 19:46:21.402262 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-d6clh" event={"ID":"ba33ec42-80cf-4999-9cb7-19b3aed25d86","Type":"ContainerStarted","Data":"8a656104d0a99c2ac9158bfc91b1554849495fdeeebd9c9edcbbc5103b6af11e"} Nov 25 19:46:23 crc kubenswrapper[4775]: I1125 19:46:23.416468 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-d6clh" event={"ID":"ba33ec42-80cf-4999-9cb7-19b3aed25d86","Type":"ContainerStarted","Data":"5ef1fb66deb500397888a12f587cf6abbf484f6006b9941be77e2aae79c5a425"} Nov 25 19:46:23 crc kubenswrapper[4775]: I1125 19:46:23.438241 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-d6clh" podStartSLOduration=2.309885872 podStartE2EDuration="4.438212947s" podCreationTimestamp="2025-11-25 19:46:19 +0000 UTC" firstStartedPulling="2025-11-25 19:46:20.438493594 +0000 UTC m=+762.354855960" lastFinishedPulling="2025-11-25 19:46:22.566820649 +0000 UTC m=+764.483183035" observedRunningTime="2025-11-25 19:46:23.43249769 +0000 UTC m=+765.348860096" watchObservedRunningTime="2025-11-25 19:46:23.438212947 +0000 UTC m=+765.354575353" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.292897 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.294065 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.296341 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ps8b7" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.306593 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.307440 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.312095 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.326020 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-glq7r"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.327001 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.341750 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.369516 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.380750 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b5dfbfdb-3ed8-442b-82c4-4cb389e18670-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-t5zws\" (UID: \"b5dfbfdb-3ed8-442b-82c4-4cb389e18670\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.380843 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c972f926-912d-49e8-8533-10045e2263da-nmstate-lock\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.380886 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c972f926-912d-49e8-8533-10045e2263da-ovs-socket\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.380971 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbbwc\" (UniqueName: \"kubernetes.io/projected/7c6692fe-ee5b-431f-ab39-cb684d304bc1-kube-api-access-bbbwc\") pod \"nmstate-metrics-5dcf9c57c5-zht2b\" (UID: 
\"7c6692fe-ee5b-431f-ab39-cb684d304bc1\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.381067 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxgnf\" (UniqueName: \"kubernetes.io/projected/c972f926-912d-49e8-8533-10045e2263da-kube-api-access-jxgnf\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.381135 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzh4s\" (UniqueName: \"kubernetes.io/projected/b5dfbfdb-3ed8-442b-82c4-4cb389e18670-kube-api-access-gzh4s\") pod \"nmstate-webhook-6b89b748d8-t5zws\" (UID: \"b5dfbfdb-3ed8-442b-82c4-4cb389e18670\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.381218 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c972f926-912d-49e8-8533-10045e2263da-dbus-socket\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.443008 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.443850 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.445752 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.446008 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.448948 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wwtb7" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.454258 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482741 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbbwc\" (UniqueName: \"kubernetes.io/projected/7c6692fe-ee5b-431f-ab39-cb684d304bc1-kube-api-access-bbbwc\") pod \"nmstate-metrics-5dcf9c57c5-zht2b\" (UID: \"7c6692fe-ee5b-431f-ab39-cb684d304bc1\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482783 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxgnf\" (UniqueName: \"kubernetes.io/projected/c972f926-912d-49e8-8533-10045e2263da-kube-api-access-jxgnf\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482809 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzh4s\" (UniqueName: \"kubernetes.io/projected/b5dfbfdb-3ed8-442b-82c4-4cb389e18670-kube-api-access-gzh4s\") pod \"nmstate-webhook-6b89b748d8-t5zws\" (UID: \"b5dfbfdb-3ed8-442b-82c4-4cb389e18670\") " 
pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482832 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c972f926-912d-49e8-8533-10045e2263da-dbus-socket\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482860 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b486f12c-2fa3-4826-a246-2f805253df99-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-bz2l6\" (UID: \"b486f12c-2fa3-4826-a246-2f805253df99\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482893 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b5dfbfdb-3ed8-442b-82c4-4cb389e18670-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-t5zws\" (UID: \"b5dfbfdb-3ed8-442b-82c4-4cb389e18670\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482918 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b486f12c-2fa3-4826-a246-2f805253df99-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-bz2l6\" (UID: \"b486f12c-2fa3-4826-a246-2f805253df99\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482941 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c972f926-912d-49e8-8533-10045e2263da-nmstate-lock\") pod \"nmstate-handler-glq7r\" (UID: 
\"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482962 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c972f926-912d-49e8-8533-10045e2263da-ovs-socket\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.482982 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8t7\" (UniqueName: \"kubernetes.io/projected/b486f12c-2fa3-4826-a246-2f805253df99-kube-api-access-rb8t7\") pod \"nmstate-console-plugin-5874bd7bc5-bz2l6\" (UID: \"b486f12c-2fa3-4826-a246-2f805253df99\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.483310 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c972f926-912d-49e8-8533-10045e2263da-nmstate-lock\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.483694 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c972f926-912d-49e8-8533-10045e2263da-dbus-socket\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.483273 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c972f926-912d-49e8-8533-10045e2263da-ovs-socket\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " 
pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.494761 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b5dfbfdb-3ed8-442b-82c4-4cb389e18670-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-t5zws\" (UID: \"b5dfbfdb-3ed8-442b-82c4-4cb389e18670\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.504527 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzh4s\" (UniqueName: \"kubernetes.io/projected/b5dfbfdb-3ed8-442b-82c4-4cb389e18670-kube-api-access-gzh4s\") pod \"nmstate-webhook-6b89b748d8-t5zws\" (UID: \"b5dfbfdb-3ed8-442b-82c4-4cb389e18670\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.505185 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxgnf\" (UniqueName: \"kubernetes.io/projected/c972f926-912d-49e8-8533-10045e2263da-kube-api-access-jxgnf\") pod \"nmstate-handler-glq7r\" (UID: \"c972f926-912d-49e8-8533-10045e2263da\") " pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.507312 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbbwc\" (UniqueName: \"kubernetes.io/projected/7c6692fe-ee5b-431f-ab39-cb684d304bc1-kube-api-access-bbbwc\") pod \"nmstate-metrics-5dcf9c57c5-zht2b\" (UID: \"7c6692fe-ee5b-431f-ab39-cb684d304bc1\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.583895 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b486f12c-2fa3-4826-a246-2f805253df99-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-bz2l6\" (UID: 
\"b486f12c-2fa3-4826-a246-2f805253df99\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.584447 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b486f12c-2fa3-4826-a246-2f805253df99-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-bz2l6\" (UID: \"b486f12c-2fa3-4826-a246-2f805253df99\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.584499 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb8t7\" (UniqueName: \"kubernetes.io/projected/b486f12c-2fa3-4826-a246-2f805253df99-kube-api-access-rb8t7\") pod \"nmstate-console-plugin-5874bd7bc5-bz2l6\" (UID: \"b486f12c-2fa3-4826-a246-2f805253df99\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.585702 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b486f12c-2fa3-4826-a246-2f805253df99-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-bz2l6\" (UID: \"b486f12c-2fa3-4826-a246-2f805253df99\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.588159 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b486f12c-2fa3-4826-a246-2f805253df99-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-bz2l6\" (UID: \"b486f12c-2fa3-4826-a246-2f805253df99\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.603388 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-c76489864-9qkdn"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.604044 
4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.607680 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb8t7\" (UniqueName: \"kubernetes.io/projected/b486f12c-2fa3-4826-a246-2f805253df99-kube-api-access-rb8t7\") pod \"nmstate-console-plugin-5874bd7bc5-bz2l6\" (UID: \"b486f12c-2fa3-4826-a246-2f805253df99\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.614786 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.620784 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c76489864-9qkdn"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.639336 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.650985 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:24 crc kubenswrapper[4775]: W1125 19:46:24.674976 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc972f926_912d_49e8_8533_10045e2263da.slice/crio-8d76d6b4b022ef172cca45f74cbdd3bc5b72e80ba3ba4102916c06ba4f6eba57 WatchSource:0}: Error finding container 8d76d6b4b022ef172cca45f74cbdd3bc5b72e80ba3ba4102916c06ba4f6eba57: Status 404 returned error can't find the container with id 8d76d6b4b022ef172cca45f74cbdd3bc5b72e80ba3ba4102916c06ba4f6eba57 Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.686012 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-oauth-serving-cert\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.686061 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-trusted-ca-bundle\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.686085 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-console-config\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.686108 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-service-ca\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.686127 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jnlm\" (UniqueName: \"kubernetes.io/projected/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-kube-api-access-6jnlm\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.686156 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-console-oauth-config\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.686173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-console-serving-cert\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.769884 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.787508 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-oauth-serving-cert\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.787558 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-trusted-ca-bundle\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.787579 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-console-config\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.787607 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-service-ca\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.787627 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jnlm\" (UniqueName: \"kubernetes.io/projected/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-kube-api-access-6jnlm\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " 
pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.787693 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-console-oauth-config\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.787715 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-console-serving-cert\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.788733 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-oauth-serving-cert\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.788943 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-service-ca\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.789134 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-console-config\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc 
kubenswrapper[4775]: I1125 19:46:24.790444 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-trusted-ca-bundle\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.791873 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-console-serving-cert\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.792538 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-console-oauth-config\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.806111 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jnlm\" (UniqueName: \"kubernetes.io/projected/b2b8847c-6a1a-4e2d-bda5-34637123a1b2-kube-api-access-6jnlm\") pod \"console-c76489864-9qkdn\" (UID: \"b2b8847c-6a1a-4e2d-bda5-34637123a1b2\") " pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.843624 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b"] Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.889201 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws"] Nov 25 19:46:24 crc kubenswrapper[4775]: W1125 19:46:24.897776 4775 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5dfbfdb_3ed8_442b_82c4_4cb389e18670.slice/crio-172450d9eed994faeff093b2c5b69819467bd2b7a0b035de41458354846a8bbb WatchSource:0}: Error finding container 172450d9eed994faeff093b2c5b69819467bd2b7a0b035de41458354846a8bbb: Status 404 returned error can't find the container with id 172450d9eed994faeff093b2c5b69819467bd2b7a0b035de41458354846a8bbb Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.964232 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:24 crc kubenswrapper[4775]: I1125 19:46:24.983930 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6"] Nov 25 19:46:24 crc kubenswrapper[4775]: W1125 19:46:24.989663 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb486f12c_2fa3_4826_a246_2f805253df99.slice/crio-05a6a2e4cdfb95ebea0ccbe7a303e4ff96b1482b10c9d2c8b7e874ba4d77c6f8 WatchSource:0}: Error finding container 05a6a2e4cdfb95ebea0ccbe7a303e4ff96b1482b10c9d2c8b7e874ba4d77c6f8: Status 404 returned error can't find the container with id 05a6a2e4cdfb95ebea0ccbe7a303e4ff96b1482b10c9d2c8b7e874ba4d77c6f8 Nov 25 19:46:25 crc kubenswrapper[4775]: I1125 19:46:25.157336 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c76489864-9qkdn"] Nov 25 19:46:25 crc kubenswrapper[4775]: I1125 19:46:25.435455 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c76489864-9qkdn" event={"ID":"b2b8847c-6a1a-4e2d-bda5-34637123a1b2","Type":"ContainerStarted","Data":"a3f9aa63a979caabea94f5a6ca68fff51e05da0159401cadbe4bc69b527a5226"} Nov 25 19:46:25 crc kubenswrapper[4775]: I1125 19:46:25.435556 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c76489864-9qkdn" 
event={"ID":"b2b8847c-6a1a-4e2d-bda5-34637123a1b2","Type":"ContainerStarted","Data":"a905cb4092c7a21db17e4962f42ed2dfc32a5f1e4675b7d7ffe1ed1a1f164008"} Nov 25 19:46:25 crc kubenswrapper[4775]: I1125 19:46:25.437595 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" event={"ID":"b486f12c-2fa3-4826-a246-2f805253df99","Type":"ContainerStarted","Data":"05a6a2e4cdfb95ebea0ccbe7a303e4ff96b1482b10c9d2c8b7e874ba4d77c6f8"} Nov 25 19:46:25 crc kubenswrapper[4775]: I1125 19:46:25.439717 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b" event={"ID":"7c6692fe-ee5b-431f-ab39-cb684d304bc1","Type":"ContainerStarted","Data":"18ea0b6f93c4f1f19b57239dc6887b673790f9927b227b243b92aef68409bd53"} Nov 25 19:46:25 crc kubenswrapper[4775]: I1125 19:46:25.443177 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" event={"ID":"b5dfbfdb-3ed8-442b-82c4-4cb389e18670","Type":"ContainerStarted","Data":"172450d9eed994faeff093b2c5b69819467bd2b7a0b035de41458354846a8bbb"} Nov 25 19:46:25 crc kubenswrapper[4775]: I1125 19:46:25.445424 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-glq7r" event={"ID":"c972f926-912d-49e8-8533-10045e2263da","Type":"ContainerStarted","Data":"8d76d6b4b022ef172cca45f74cbdd3bc5b72e80ba3ba4102916c06ba4f6eba57"} Nov 25 19:46:25 crc kubenswrapper[4775]: I1125 19:46:25.462619 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-c76489864-9qkdn" podStartSLOduration=1.462590171 podStartE2EDuration="1.462590171s" podCreationTimestamp="2025-11-25 19:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:46:25.459113542 +0000 UTC m=+767.375475948" watchObservedRunningTime="2025-11-25 19:46:25.462590171 
+0000 UTC m=+767.378952577" Nov 25 19:46:28 crc kubenswrapper[4775]: I1125 19:46:28.472484 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-glq7r" event={"ID":"c972f926-912d-49e8-8533-10045e2263da","Type":"ContainerStarted","Data":"3e41ca52b886949d492072895d162041f5de19b97682fd79f21189dfc008fb07"} Nov 25 19:46:28 crc kubenswrapper[4775]: I1125 19:46:28.472974 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:28 crc kubenswrapper[4775]: I1125 19:46:28.477564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" event={"ID":"b486f12c-2fa3-4826-a246-2f805253df99","Type":"ContainerStarted","Data":"a6587cc8017c6cfea3d9544c94eb5ae8119886b7ce5515cb19d5eedc3d9e8116"} Nov 25 19:46:28 crc kubenswrapper[4775]: I1125 19:46:28.482133 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b" event={"ID":"7c6692fe-ee5b-431f-ab39-cb684d304bc1","Type":"ContainerStarted","Data":"5d9ac49d8d35158da17403be3f58bfac0337b59d8402b448daeab0dac058fd59"} Nov 25 19:46:28 crc kubenswrapper[4775]: I1125 19:46:28.490705 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" event={"ID":"b5dfbfdb-3ed8-442b-82c4-4cb389e18670","Type":"ContainerStarted","Data":"0d10abbd5fedb823c7089f2c61d4258cff6e084e93b9912d04476bc49d0e035d"} Nov 25 19:46:28 crc kubenswrapper[4775]: I1125 19:46:28.491870 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:46:28 crc kubenswrapper[4775]: I1125 19:46:28.502378 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-glq7r" podStartSLOduration=1.585695563 podStartE2EDuration="4.502356979s" podCreationTimestamp="2025-11-25 19:46:24 +0000 
UTC" firstStartedPulling="2025-11-25 19:46:24.676878437 +0000 UTC m=+766.593240803" lastFinishedPulling="2025-11-25 19:46:27.593539853 +0000 UTC m=+769.509902219" observedRunningTime="2025-11-25 19:46:28.494333713 +0000 UTC m=+770.410696149" watchObservedRunningTime="2025-11-25 19:46:28.502356979 +0000 UTC m=+770.418719355" Nov 25 19:46:28 crc kubenswrapper[4775]: I1125 19:46:28.522200 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-bz2l6" podStartSLOduration=1.918636725 podStartE2EDuration="4.522169536s" podCreationTimestamp="2025-11-25 19:46:24 +0000 UTC" firstStartedPulling="2025-11-25 19:46:24.992170128 +0000 UTC m=+766.908532494" lastFinishedPulling="2025-11-25 19:46:27.595702899 +0000 UTC m=+769.512065305" observedRunningTime="2025-11-25 19:46:28.518858211 +0000 UTC m=+770.435220647" watchObservedRunningTime="2025-11-25 19:46:28.522169536 +0000 UTC m=+770.438531932" Nov 25 19:46:28 crc kubenswrapper[4775]: I1125 19:46:28.547573 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" podStartSLOduration=1.830886349 podStartE2EDuration="4.547548265s" podCreationTimestamp="2025-11-25 19:46:24 +0000 UTC" firstStartedPulling="2025-11-25 19:46:24.900300447 +0000 UTC m=+766.816662813" lastFinishedPulling="2025-11-25 19:46:27.616962333 +0000 UTC m=+769.533324729" observedRunningTime="2025-11-25 19:46:28.544778896 +0000 UTC m=+770.461141322" watchObservedRunningTime="2025-11-25 19:46:28.547548265 +0000 UTC m=+770.463910691" Nov 25 19:46:30 crc kubenswrapper[4775]: I1125 19:46:30.507431 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b" event={"ID":"7c6692fe-ee5b-431f-ab39-cb684d304bc1","Type":"ContainerStarted","Data":"a34c41b2327b0abea070b76e2a3fd498b1608af51993756e720e2ff461b0eb45"} Nov 25 19:46:34 crc kubenswrapper[4775]: I1125 19:46:34.689775 4775 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-glq7r" Nov 25 19:46:34 crc kubenswrapper[4775]: I1125 19:46:34.729628 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-zht2b" podStartSLOduration=5.6001203109999995 podStartE2EDuration="10.729594715s" podCreationTimestamp="2025-11-25 19:46:24 +0000 UTC" firstStartedPulling="2025-11-25 19:46:24.873789438 +0000 UTC m=+766.790151804" lastFinishedPulling="2025-11-25 19:46:30.003263842 +0000 UTC m=+771.919626208" observedRunningTime="2025-11-25 19:46:30.53953219 +0000 UTC m=+772.455894586" watchObservedRunningTime="2025-11-25 19:46:34.729594715 +0000 UTC m=+776.645957111" Nov 25 19:46:34 crc kubenswrapper[4775]: I1125 19:46:34.964940 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:34 crc kubenswrapper[4775]: I1125 19:46:34.965036 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:34 crc kubenswrapper[4775]: I1125 19:46:34.976177 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:35 crc kubenswrapper[4775]: I1125 19:46:35.552425 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-c76489864-9qkdn" Nov 25 19:46:35 crc kubenswrapper[4775]: I1125 19:46:35.617747 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2c2hp"] Nov 25 19:46:41 crc kubenswrapper[4775]: I1125 19:46:41.070967 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 25 19:46:41 crc kubenswrapper[4775]: I1125 19:46:41.071976 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:46:41 crc kubenswrapper[4775]: I1125 19:46:41.072047 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:46:41 crc kubenswrapper[4775]: I1125 19:46:41.072855 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9e60c7320dcbc3b2c5ac1396fe8089095784ebc9e95a14db7f39bea21a7ea59"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 19:46:41 crc kubenswrapper[4775]: I1125 19:46:41.072959 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://f9e60c7320dcbc3b2c5ac1396fe8089095784ebc9e95a14db7f39bea21a7ea59" gracePeriod=600 Nov 25 19:46:41 crc kubenswrapper[4775]: I1125 19:46:41.597945 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="f9e60c7320dcbc3b2c5ac1396fe8089095784ebc9e95a14db7f39bea21a7ea59" exitCode=0 Nov 25 19:46:41 crc kubenswrapper[4775]: I1125 19:46:41.598062 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"f9e60c7320dcbc3b2c5ac1396fe8089095784ebc9e95a14db7f39bea21a7ea59"} Nov 25 19:46:41 crc kubenswrapper[4775]: I1125 19:46:41.598508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"7b6dd9da01186a3ce4b866b2112d5b02fb5c358a2952aa59bf01efd8cd71d7aa"} Nov 25 19:46:41 crc kubenswrapper[4775]: I1125 19:46:41.598570 4775 scope.go:117] "RemoveContainer" containerID="814f270a52200f75169128bcfe904e73985125f44369ce9e0392e2533ead19f8" Nov 25 19:46:44 crc kubenswrapper[4775]: I1125 19:46:44.655984 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5zws" Nov 25 19:47:00 crc kubenswrapper[4775]: I1125 19:47:00.677329 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2c2hp" podUID="f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" containerName="console" containerID="cri-o://cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5" gracePeriod=15 Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.154862 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2c2hp_f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3/console/0.log" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.154930 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.182550 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-trusted-ca-bundle\") pod \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.182615 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-oauth-serving-cert\") pod \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.182634 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8llt\" (UniqueName: \"kubernetes.io/projected/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-kube-api-access-g8llt\") pod \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.182682 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-service-ca\") pod \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.182732 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-serving-cert\") pod \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.182754 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-config\") pod \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.182774 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-oauth-config\") pod \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\" (UID: \"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3\") " Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.183352 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" (UID: "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.183367 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" (UID: "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.183629 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-config" (OuterVolumeSpecName: "console-config") pod "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" (UID: "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.183785 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-service-ca" (OuterVolumeSpecName: "service-ca") pod "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" (UID: "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.190078 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" (UID: "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.193938 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" (UID: "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.194365 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-kube-api-access-g8llt" (OuterVolumeSpecName: "kube-api-access-g8llt") pod "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" (UID: "f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3"). InnerVolumeSpecName "kube-api-access-g8llt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.284321 4775 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.284532 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8llt\" (UniqueName: \"kubernetes.io/projected/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-kube-api-access-g8llt\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.284590 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.284638 4775 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.284706 4775 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.284754 4775 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.284801 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:01 crc 
kubenswrapper[4775]: I1125 19:47:01.782566 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2c2hp_f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3/console/0.log" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.782633 4775 generic.go:334] "Generic (PLEG): container finished" podID="f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" containerID="cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5" exitCode=2 Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.782735 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2c2hp" event={"ID":"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3","Type":"ContainerDied","Data":"cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5"} Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.782773 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2c2hp" event={"ID":"f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3","Type":"ContainerDied","Data":"fcb56a6a22343724451749a88e8447fdc16f3a193694c2560d6936e989f82b0f"} Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.782772 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2c2hp" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.782790 4775 scope.go:117] "RemoveContainer" containerID="cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.807002 4775 scope.go:117] "RemoveContainer" containerID="cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5" Nov 25 19:47:01 crc kubenswrapper[4775]: E1125 19:47:01.807538 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5\": container with ID starting with cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5 not found: ID does not exist" containerID="cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.807571 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5"} err="failed to get container status \"cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5\": rpc error: code = NotFound desc = could not find container \"cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5\": container with ID starting with cf54962a282d23b71f7663b6b0f1009b38191c874b97e535e74c06292ddf5ca5 not found: ID does not exist" Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.826363 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2c2hp"] Nov 25 19:47:01 crc kubenswrapper[4775]: I1125 19:47:01.830183 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2c2hp"] Nov 25 19:47:02 crc kubenswrapper[4775]: I1125 19:47:02.861275 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" 
path="/var/lib/kubelet/pods/f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3/volumes" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.101609 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m"] Nov 25 19:47:03 crc kubenswrapper[4775]: E1125 19:47:03.101988 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" containerName="console" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.102017 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" containerName="console" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.102191 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89f25e8-fc62-4be1-9cb2-f9cb8b7c39b3" containerName="console" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.103463 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.107303 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.113433 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.113954 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-bundle\") 
pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.114213 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7vf6\" (UniqueName: \"kubernetes.io/projected/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-kube-api-access-j7vf6\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.119854 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m"] Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.216370 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.216451 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.216510 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j7vf6\" (UniqueName: \"kubernetes.io/projected/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-kube-api-access-j7vf6\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.217048 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.217194 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.240590 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7vf6\" (UniqueName: \"kubernetes.io/projected/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-kube-api-access-j7vf6\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.434602 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.707301 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m"] Nov 25 19:47:03 crc kubenswrapper[4775]: W1125 19:47:03.714021 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda7c7d85_6da9_4aa0_9b6b_89aca8bef945.slice/crio-c8942e06388e49f8237f4c61b5c6d8a9d78a36e0b011d3015eb133298e002714 WatchSource:0}: Error finding container c8942e06388e49f8237f4c61b5c6d8a9d78a36e0b011d3015eb133298e002714: Status 404 returned error can't find the container with id c8942e06388e49f8237f4c61b5c6d8a9d78a36e0b011d3015eb133298e002714 Nov 25 19:47:03 crc kubenswrapper[4775]: I1125 19:47:03.804930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" event={"ID":"da7c7d85-6da9-4aa0-9b6b-89aca8bef945","Type":"ContainerStarted","Data":"c8942e06388e49f8237f4c61b5c6d8a9d78a36e0b011d3015eb133298e002714"} Nov 25 19:47:04 crc kubenswrapper[4775]: I1125 19:47:04.815274 4775 generic.go:334] "Generic (PLEG): container finished" podID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerID="4c40f6e8bb519f1061e999eb63977d47c3c8960e47934f1232d4a9f79c7029a1" exitCode=0 Nov 25 19:47:04 crc kubenswrapper[4775]: I1125 19:47:04.815413 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" event={"ID":"da7c7d85-6da9-4aa0-9b6b-89aca8bef945","Type":"ContainerDied","Data":"4c40f6e8bb519f1061e999eb63977d47c3c8960e47934f1232d4a9f79c7029a1"} Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.440642 4775 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-g8zf9"] Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.442817 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.458372 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8zf9"] Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.468192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-utilities\") pod \"redhat-operators-g8zf9\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.468259 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tqh\" (UniqueName: \"kubernetes.io/projected/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-kube-api-access-66tqh\") pod \"redhat-operators-g8zf9\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.468446 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-catalog-content\") pod \"redhat-operators-g8zf9\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.570316 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-catalog-content\") pod \"redhat-operators-g8zf9\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " 
pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.570400 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-utilities\") pod \"redhat-operators-g8zf9\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.570450 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tqh\" (UniqueName: \"kubernetes.io/projected/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-kube-api-access-66tqh\") pod \"redhat-operators-g8zf9\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.571145 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-utilities\") pod \"redhat-operators-g8zf9\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.571452 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-catalog-content\") pod \"redhat-operators-g8zf9\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.595439 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tqh\" (UniqueName: \"kubernetes.io/projected/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-kube-api-access-66tqh\") pod \"redhat-operators-g8zf9\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 
crc kubenswrapper[4775]: I1125 19:47:06.776497 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.842568 4775 generic.go:334] "Generic (PLEG): container finished" podID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerID="b6fb0f6a3814316716e67a653e8579931e9c7b5124771228de077f1d7dd224f8" exitCode=0 Nov 25 19:47:06 crc kubenswrapper[4775]: I1125 19:47:06.842659 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" event={"ID":"da7c7d85-6da9-4aa0-9b6b-89aca8bef945","Type":"ContainerDied","Data":"b6fb0f6a3814316716e67a653e8579931e9c7b5124771228de077f1d7dd224f8"} Nov 25 19:47:07 crc kubenswrapper[4775]: I1125 19:47:07.264880 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8zf9"] Nov 25 19:47:07 crc kubenswrapper[4775]: W1125 19:47:07.269628 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2e7fe3_520e_40ec_876f_c0b032c92e8d.slice/crio-c852f84ea6ec95efc2b51c30fdfc4625027225f22cfee6986a1c527a8d2df4ad WatchSource:0}: Error finding container c852f84ea6ec95efc2b51c30fdfc4625027225f22cfee6986a1c527a8d2df4ad: Status 404 returned error can't find the container with id c852f84ea6ec95efc2b51c30fdfc4625027225f22cfee6986a1c527a8d2df4ad Nov 25 19:47:07 crc kubenswrapper[4775]: I1125 19:47:07.851900 4775 generic.go:334] "Generic (PLEG): container finished" podID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerID="2096455c4ce381dcca9227c7b11a4921ad78273853a966ccc90b4d854ac6b58b" exitCode=0 Nov 25 19:47:07 crc kubenswrapper[4775]: I1125 19:47:07.851957 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" 
event={"ID":"da7c7d85-6da9-4aa0-9b6b-89aca8bef945","Type":"ContainerDied","Data":"2096455c4ce381dcca9227c7b11a4921ad78273853a966ccc90b4d854ac6b58b"} Nov 25 19:47:07 crc kubenswrapper[4775]: I1125 19:47:07.854347 4775 generic.go:334] "Generic (PLEG): container finished" podID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerID="3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88" exitCode=0 Nov 25 19:47:07 crc kubenswrapper[4775]: I1125 19:47:07.854387 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8zf9" event={"ID":"cf2e7fe3-520e-40ec-876f-c0b032c92e8d","Type":"ContainerDied","Data":"3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88"} Nov 25 19:47:07 crc kubenswrapper[4775]: I1125 19:47:07.854410 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8zf9" event={"ID":"cf2e7fe3-520e-40ec-876f-c0b032c92e8d","Type":"ContainerStarted","Data":"c852f84ea6ec95efc2b51c30fdfc4625027225f22cfee6986a1c527a8d2df4ad"} Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.181697 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.207603 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7vf6\" (UniqueName: \"kubernetes.io/projected/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-kube-api-access-j7vf6\") pod \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.207676 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-util\") pod \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.207754 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-bundle\") pod \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\" (UID: \"da7c7d85-6da9-4aa0-9b6b-89aca8bef945\") " Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.209349 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-bundle" (OuterVolumeSpecName: "bundle") pod "da7c7d85-6da9-4aa0-9b6b-89aca8bef945" (UID: "da7c7d85-6da9-4aa0-9b6b-89aca8bef945"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.215417 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-kube-api-access-j7vf6" (OuterVolumeSpecName: "kube-api-access-j7vf6") pod "da7c7d85-6da9-4aa0-9b6b-89aca8bef945" (UID: "da7c7d85-6da9-4aa0-9b6b-89aca8bef945"). InnerVolumeSpecName "kube-api-access-j7vf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.309244 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.309286 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7vf6\" (UniqueName: \"kubernetes.io/projected/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-kube-api-access-j7vf6\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.340275 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-util" (OuterVolumeSpecName: "util") pod "da7c7d85-6da9-4aa0-9b6b-89aca8bef945" (UID: "da7c7d85-6da9-4aa0-9b6b-89aca8bef945"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.410922 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da7c7d85-6da9-4aa0-9b6b-89aca8bef945-util\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.883116 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.883128 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m" event={"ID":"da7c7d85-6da9-4aa0-9b6b-89aca8bef945","Type":"ContainerDied","Data":"c8942e06388e49f8237f4c61b5c6d8a9d78a36e0b011d3015eb133298e002714"} Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.883499 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8942e06388e49f8237f4c61b5c6d8a9d78a36e0b011d3015eb133298e002714" Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.886431 4775 generic.go:334] "Generic (PLEG): container finished" podID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerID="c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f" exitCode=0 Nov 25 19:47:09 crc kubenswrapper[4775]: I1125 19:47:09.886499 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8zf9" event={"ID":"cf2e7fe3-520e-40ec-876f-c0b032c92e8d","Type":"ContainerDied","Data":"c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f"} Nov 25 19:47:10 crc kubenswrapper[4775]: I1125 19:47:10.898256 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8zf9" event={"ID":"cf2e7fe3-520e-40ec-876f-c0b032c92e8d","Type":"ContainerStarted","Data":"4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa"} Nov 25 19:47:10 crc kubenswrapper[4775]: I1125 19:47:10.926989 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g8zf9" podStartSLOduration=2.4989583140000002 podStartE2EDuration="4.926970076s" podCreationTimestamp="2025-11-25 19:47:06 +0000 UTC" firstStartedPulling="2025-11-25 19:47:07.856315927 +0000 UTC m=+809.772678333" 
lastFinishedPulling="2025-11-25 19:47:10.284327719 +0000 UTC m=+812.200690095" observedRunningTime="2025-11-25 19:47:10.925097865 +0000 UTC m=+812.841460231" watchObservedRunningTime="2025-11-25 19:47:10.926970076 +0000 UTC m=+812.843332452" Nov 25 19:47:16 crc kubenswrapper[4775]: I1125 19:47:16.776939 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:16 crc kubenswrapper[4775]: I1125 19:47:16.777695 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:16 crc kubenswrapper[4775]: I1125 19:47:16.860202 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:17 crc kubenswrapper[4775]: I1125 19:47:17.021918 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.601707 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp"] Nov 25 19:47:18 crc kubenswrapper[4775]: E1125 19:47:18.601933 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerName="util" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.601945 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerName="util" Nov 25 19:47:18 crc kubenswrapper[4775]: E1125 19:47:18.601951 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerName="pull" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.601957 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerName="pull" Nov 25 19:47:18 crc kubenswrapper[4775]: E1125 19:47:18.601970 4775 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerName="extract" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.601976 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerName="extract" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.602073 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7c7d85-6da9-4aa0-9b6b-89aca8bef945" containerName="extract" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.602416 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.607463 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.607549 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.607565 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.607684 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.608419 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-m5p9b" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.618085 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp"] Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.738981 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bz8t8\" (UniqueName: \"kubernetes.io/projected/8f53c019-29df-4614-a285-cc2b88dba2ba-kube-api-access-bz8t8\") pod \"metallb-operator-controller-manager-64cc678d47-lk7dp\" (UID: \"8f53c019-29df-4614-a285-cc2b88dba2ba\") " pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.739030 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f53c019-29df-4614-a285-cc2b88dba2ba-apiservice-cert\") pod \"metallb-operator-controller-manager-64cc678d47-lk7dp\" (UID: \"8f53c019-29df-4614-a285-cc2b88dba2ba\") " pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.739065 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f53c019-29df-4614-a285-cc2b88dba2ba-webhook-cert\") pod \"metallb-operator-controller-manager-64cc678d47-lk7dp\" (UID: \"8f53c019-29df-4614-a285-cc2b88dba2ba\") " pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.839573 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz8t8\" (UniqueName: \"kubernetes.io/projected/8f53c019-29df-4614-a285-cc2b88dba2ba-kube-api-access-bz8t8\") pod \"metallb-operator-controller-manager-64cc678d47-lk7dp\" (UID: \"8f53c019-29df-4614-a285-cc2b88dba2ba\") " pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.839835 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f53c019-29df-4614-a285-cc2b88dba2ba-apiservice-cert\") pod \"metallb-operator-controller-manager-64cc678d47-lk7dp\" 
(UID: \"8f53c019-29df-4614-a285-cc2b88dba2ba\") " pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.839923 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f53c019-29df-4614-a285-cc2b88dba2ba-webhook-cert\") pod \"metallb-operator-controller-manager-64cc678d47-lk7dp\" (UID: \"8f53c019-29df-4614-a285-cc2b88dba2ba\") " pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.846951 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f53c019-29df-4614-a285-cc2b88dba2ba-webhook-cert\") pod \"metallb-operator-controller-manager-64cc678d47-lk7dp\" (UID: \"8f53c019-29df-4614-a285-cc2b88dba2ba\") " pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.852640 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f53c019-29df-4614-a285-cc2b88dba2ba-apiservice-cert\") pod \"metallb-operator-controller-manager-64cc678d47-lk7dp\" (UID: \"8f53c019-29df-4614-a285-cc2b88dba2ba\") " pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.857639 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q"] Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.858349 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.860946 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-gdqbq" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.861918 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.861929 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.868022 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz8t8\" (UniqueName: \"kubernetes.io/projected/8f53c019-29df-4614-a285-cc2b88dba2ba-kube-api-access-bz8t8\") pod \"metallb-operator-controller-manager-64cc678d47-lk7dp\" (UID: \"8f53c019-29df-4614-a285-cc2b88dba2ba\") " pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.877398 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q"] Nov 25 19:47:18 crc kubenswrapper[4775]: I1125 19:47:18.917313 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.044738 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1efb9150-f88c-4d86-a034-e49f9576f96a-apiservice-cert\") pod \"metallb-operator-webhook-server-747fc6cfc5-9qp9q\" (UID: \"1efb9150-f88c-4d86-a034-e49f9576f96a\") " pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.045097 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt866\" (UniqueName: \"kubernetes.io/projected/1efb9150-f88c-4d86-a034-e49f9576f96a-kube-api-access-kt866\") pod \"metallb-operator-webhook-server-747fc6cfc5-9qp9q\" (UID: \"1efb9150-f88c-4d86-a034-e49f9576f96a\") " pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.045181 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1efb9150-f88c-4d86-a034-e49f9576f96a-webhook-cert\") pod \"metallb-operator-webhook-server-747fc6cfc5-9qp9q\" (UID: \"1efb9150-f88c-4d86-a034-e49f9576f96a\") " pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.124713 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp"] Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.146278 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1efb9150-f88c-4d86-a034-e49f9576f96a-apiservice-cert\") pod \"metallb-operator-webhook-server-747fc6cfc5-9qp9q\" (UID: 
\"1efb9150-f88c-4d86-a034-e49f9576f96a\") " pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.146363 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt866\" (UniqueName: \"kubernetes.io/projected/1efb9150-f88c-4d86-a034-e49f9576f96a-kube-api-access-kt866\") pod \"metallb-operator-webhook-server-747fc6cfc5-9qp9q\" (UID: \"1efb9150-f88c-4d86-a034-e49f9576f96a\") " pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.146453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1efb9150-f88c-4d86-a034-e49f9576f96a-webhook-cert\") pod \"metallb-operator-webhook-server-747fc6cfc5-9qp9q\" (UID: \"1efb9150-f88c-4d86-a034-e49f9576f96a\") " pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.151558 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1efb9150-f88c-4d86-a034-e49f9576f96a-webhook-cert\") pod \"metallb-operator-webhook-server-747fc6cfc5-9qp9q\" (UID: \"1efb9150-f88c-4d86-a034-e49f9576f96a\") " pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.153533 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1efb9150-f88c-4d86-a034-e49f9576f96a-apiservice-cert\") pod \"metallb-operator-webhook-server-747fc6cfc5-9qp9q\" (UID: \"1efb9150-f88c-4d86-a034-e49f9576f96a\") " pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.162229 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt866\" 
(UniqueName: \"kubernetes.io/projected/1efb9150-f88c-4d86-a034-e49f9576f96a-kube-api-access-kt866\") pod \"metallb-operator-webhook-server-747fc6cfc5-9qp9q\" (UID: \"1efb9150-f88c-4d86-a034-e49f9576f96a\") " pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.211088 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.627052 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q"] Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.977300 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" event={"ID":"8f53c019-29df-4614-a285-cc2b88dba2ba","Type":"ContainerStarted","Data":"6187bc05dda5dadd7717aa91f14b854a3655e60197343394c1ed7cf56d8727b2"} Nov 25 19:47:19 crc kubenswrapper[4775]: I1125 19:47:19.978974 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" event={"ID":"1efb9150-f88c-4d86-a034-e49f9576f96a","Type":"ContainerStarted","Data":"2a4e0f0265f043b6143e9333a3973996190c84328b8ef1c2ac54eaf48db4774e"} Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.027790 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8zf9"] Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.028120 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g8zf9" podUID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerName="registry-server" containerID="cri-o://4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa" gracePeriod=2 Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.435870 4775 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.563280 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-catalog-content\") pod \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.563468 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66tqh\" (UniqueName: \"kubernetes.io/projected/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-kube-api-access-66tqh\") pod \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.563517 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-utilities\") pod \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\" (UID: \"cf2e7fe3-520e-40ec-876f-c0b032c92e8d\") " Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.564737 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-utilities" (OuterVolumeSpecName: "utilities") pod "cf2e7fe3-520e-40ec-876f-c0b032c92e8d" (UID: "cf2e7fe3-520e-40ec-876f-c0b032c92e8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.571808 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-kube-api-access-66tqh" (OuterVolumeSpecName: "kube-api-access-66tqh") pod "cf2e7fe3-520e-40ec-876f-c0b032c92e8d" (UID: "cf2e7fe3-520e-40ec-876f-c0b032c92e8d"). InnerVolumeSpecName "kube-api-access-66tqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.665741 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66tqh\" (UniqueName: \"kubernetes.io/projected/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-kube-api-access-66tqh\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.665782 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.706551 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf2e7fe3-520e-40ec-876f-c0b032c92e8d" (UID: "cf2e7fe3-520e-40ec-876f-c0b032c92e8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.770676 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2e7fe3-520e-40ec-876f-c0b032c92e8d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.988722 4775 generic.go:334] "Generic (PLEG): container finished" podID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerID="4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa" exitCode=0 Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.988764 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8zf9" event={"ID":"cf2e7fe3-520e-40ec-876f-c0b032c92e8d","Type":"ContainerDied","Data":"4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa"} Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.988797 4775 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-g8zf9" event={"ID":"cf2e7fe3-520e-40ec-876f-c0b032c92e8d","Type":"ContainerDied","Data":"c852f84ea6ec95efc2b51c30fdfc4625027225f22cfee6986a1c527a8d2df4ad"} Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.988816 4775 scope.go:117] "RemoveContainer" containerID="4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa" Nov 25 19:47:20 crc kubenswrapper[4775]: I1125 19:47:20.988829 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8zf9" Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.004633 4775 scope.go:117] "RemoveContainer" containerID="c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f" Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.006099 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8zf9"] Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.012592 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g8zf9"] Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.030300 4775 scope.go:117] "RemoveContainer" containerID="3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88" Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.055766 4775 scope.go:117] "RemoveContainer" containerID="4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa" Nov 25 19:47:21 crc kubenswrapper[4775]: E1125 19:47:21.056142 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa\": container with ID starting with 4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa not found: ID does not exist" containerID="4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa" Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.056180 4775 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa"} err="failed to get container status \"4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa\": rpc error: code = NotFound desc = could not find container \"4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa\": container with ID starting with 4006c9107432b4bb80f523cd178189357d291a33da00834d259b8adf33ef0eaa not found: ID does not exist" Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.056204 4775 scope.go:117] "RemoveContainer" containerID="c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f" Nov 25 19:47:21 crc kubenswrapper[4775]: E1125 19:47:21.062635 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f\": container with ID starting with c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f not found: ID does not exist" containerID="c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f" Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.062682 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f"} err="failed to get container status \"c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f\": rpc error: code = NotFound desc = could not find container \"c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f\": container with ID starting with c8ab3245c4c562aad90f627bcf06d45a56342e6d139c5240b6a589db7d69803f not found: ID does not exist" Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.062700 4775 scope.go:117] "RemoveContainer" containerID="3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88" Nov 25 19:47:21 crc kubenswrapper[4775]: E1125 
19:47:21.063000 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88\": container with ID starting with 3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88 not found: ID does not exist" containerID="3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88" Nov 25 19:47:21 crc kubenswrapper[4775]: I1125 19:47:21.063028 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88"} err="failed to get container status \"3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88\": rpc error: code = NotFound desc = could not find container \"3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88\": container with ID starting with 3f6f0b84349612e834afd3a407eb7ed529548f4049dc9934a0785dae52aa3c88 not found: ID does not exist" Nov 25 19:47:22 crc kubenswrapper[4775]: I1125 19:47:22.854037 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" path="/var/lib/kubelet/pods/cf2e7fe3-520e-40ec-876f-c0b032c92e8d/volumes" Nov 25 19:47:23 crc kubenswrapper[4775]: I1125 19:47:23.008154 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" event={"ID":"8f53c019-29df-4614-a285-cc2b88dba2ba","Type":"ContainerStarted","Data":"c64c7c34080da47842b7c9e2740c01e138a556cc5256e295683b96abe0083023"} Nov 25 19:47:23 crc kubenswrapper[4775]: I1125 19:47:23.008824 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:23 crc kubenswrapper[4775]: I1125 19:47:23.042613 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" podStartSLOduration=1.649439957 podStartE2EDuration="5.042596919s" podCreationTimestamp="2025-11-25 19:47:18 +0000 UTC" firstStartedPulling="2025-11-25 19:47:19.160874903 +0000 UTC m=+821.077237269" lastFinishedPulling="2025-11-25 19:47:22.554031865 +0000 UTC m=+824.470394231" observedRunningTime="2025-11-25 19:47:23.036776538 +0000 UTC m=+824.953138904" watchObservedRunningTime="2025-11-25 19:47:23.042596919 +0000 UTC m=+824.958959285" Nov 25 19:47:25 crc kubenswrapper[4775]: I1125 19:47:25.026856 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" event={"ID":"1efb9150-f88c-4d86-a034-e49f9576f96a","Type":"ContainerStarted","Data":"8dfad520ebee6cbbc3d534d2c4afa4aea43ffdc7798f73cedd711bb6ae21d9a9"} Nov 25 19:47:25 crc kubenswrapper[4775]: I1125 19:47:25.027684 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:25 crc kubenswrapper[4775]: I1125 19:47:25.055549 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" podStartSLOduration=2.292768903 podStartE2EDuration="7.055530521s" podCreationTimestamp="2025-11-25 19:47:18 +0000 UTC" firstStartedPulling="2025-11-25 19:47:19.640690217 +0000 UTC m=+821.557052583" lastFinishedPulling="2025-11-25 19:47:24.403451815 +0000 UTC m=+826.319814201" observedRunningTime="2025-11-25 19:47:25.050930955 +0000 UTC m=+826.967293331" watchObservedRunningTime="2025-11-25 19:47:25.055530521 +0000 UTC m=+826.971892887" Nov 25 19:47:39 crc kubenswrapper[4775]: I1125 19:47:39.219369 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-747fc6cfc5-9qp9q" Nov 25 19:47:58 crc kubenswrapper[4775]: I1125 19:47:58.921774 4775 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-64cc678d47-lk7dp" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.727055 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb"] Nov 25 19:47:59 crc kubenswrapper[4775]: E1125 19:47:59.727542 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerName="extract-content" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.727556 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerName="extract-content" Nov 25 19:47:59 crc kubenswrapper[4775]: E1125 19:47:59.727571 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerName="registry-server" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.727579 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerName="registry-server" Nov 25 19:47:59 crc kubenswrapper[4775]: E1125 19:47:59.727604 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerName="extract-utilities" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.727613 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerName="extract-utilities" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.727756 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf2e7fe3-520e-40ec-876f-c0b032c92e8d" containerName="registry-server" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.728262 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.734155 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-b9hkl"] Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.734342 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.734373 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5dtsp" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.738331 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.744335 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb"] Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.747038 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.747135 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.813068 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7p6bq"] Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.816501 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.820777 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.821019 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-f9wnc" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.821144 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.821909 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.828575 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d8756c7-051e-4ab6-bd7b-32a5f2646497-metrics-certs\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.828823 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e289e852-bef3-4376-8a15-b94339b1a3a3-metallb-excludel2\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.828930 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-metrics-certs\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.829031 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-reloader\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.829131 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzj5k\" (UniqueName: \"kubernetes.io/projected/9c2fb5c5-5143-45f7-bcef-e6374fb45624-kube-api-access-pzj5k\") pod \"frr-k8s-webhook-server-6998585d5-ngmrb\" (UID: \"9c2fb5c5-5143-45f7-bcef-e6374fb45624\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.829236 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-memberlist\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.829323 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-frr-sockets\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.829434 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx26w\" (UniqueName: \"kubernetes.io/projected/0d8756c7-051e-4ab6-bd7b-32a5f2646497-kube-api-access-wx26w\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.829531 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx4vg\" (UniqueName: \"kubernetes.io/projected/e289e852-bef3-4376-8a15-b94339b1a3a3-kube-api-access-mx4vg\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.829640 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0d8756c7-051e-4ab6-bd7b-32a5f2646497-frr-startup\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.828748 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-dddw7"] Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.829859 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-frr-conf\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.830110 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-metrics\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.830170 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9c2fb5c5-5143-45f7-bcef-e6374fb45624-cert\") pod \"frr-k8s-webhook-server-6998585d5-ngmrb\" (UID: \"9c2fb5c5-5143-45f7-bcef-e6374fb45624\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 
25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.831197 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.836133 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.839393 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-dddw7"] Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931135 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx4vg\" (UniqueName: \"kubernetes.io/projected/e289e852-bef3-4376-8a15-b94339b1a3a3-kube-api-access-mx4vg\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931178 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx26w\" (UniqueName: \"kubernetes.io/projected/0d8756c7-051e-4ab6-bd7b-32a5f2646497-kube-api-access-wx26w\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931203 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0d8756c7-051e-4ab6-bd7b-32a5f2646497-frr-startup\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931223 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-frr-conf\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 
19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931291 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-metrics\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931334 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9c2fb5c5-5143-45f7-bcef-e6374fb45624-cert\") pod \"frr-k8s-webhook-server-6998585d5-ngmrb\" (UID: \"9c2fb5c5-5143-45f7-bcef-e6374fb45624\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931357 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d8756c7-051e-4ab6-bd7b-32a5f2646497-metrics-certs\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931395 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e289e852-bef3-4376-8a15-b94339b1a3a3-metallb-excludel2\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931422 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-metrics-certs\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931451 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/52e5e476-3337-4715-9e67-b7230874d2d4-cert\") pod \"controller-6c7b4b5f48-dddw7\" (UID: \"52e5e476-3337-4715-9e67-b7230874d2d4\") " pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931473 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-reloader\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931497 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl6dm\" (UniqueName: \"kubernetes.io/projected/52e5e476-3337-4715-9e67-b7230874d2d4-kube-api-access-dl6dm\") pod \"controller-6c7b4b5f48-dddw7\" (UID: \"52e5e476-3337-4715-9e67-b7230874d2d4\") " pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931520 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzj5k\" (UniqueName: \"kubernetes.io/projected/9c2fb5c5-5143-45f7-bcef-e6374fb45624-kube-api-access-pzj5k\") pod \"frr-k8s-webhook-server-6998585d5-ngmrb\" (UID: \"9c2fb5c5-5143-45f7-bcef-e6374fb45624\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931549 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-memberlist\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931567 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-frr-sockets\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.931611 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52e5e476-3337-4715-9e67-b7230874d2d4-metrics-certs\") pod \"controller-6c7b4b5f48-dddw7\" (UID: \"52e5e476-3337-4715-9e67-b7230874d2d4\") " pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:47:59 crc kubenswrapper[4775]: E1125 19:47:59.933189 4775 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 19:47:59 crc kubenswrapper[4775]: E1125 19:47:59.933286 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-memberlist podName:e289e852-bef3-4376-8a15-b94339b1a3a3 nodeName:}" failed. No retries permitted until 2025-11-25 19:48:00.433264612 +0000 UTC m=+862.349626988 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-memberlist") pod "speaker-7p6bq" (UID: "e289e852-bef3-4376-8a15-b94339b1a3a3") : secret "metallb-memberlist" not found Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.933534 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0d8756c7-051e-4ab6-bd7b-32a5f2646497-frr-startup\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.933599 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-metrics\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.933827 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-frr-sockets\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.934116 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-frr-conf\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.934191 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e289e852-bef3-4376-8a15-b94339b1a3a3-metallb-excludel2\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 
25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.934197 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0d8756c7-051e-4ab6-bd7b-32a5f2646497-reloader\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.938827 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-metrics-certs\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.942131 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d8756c7-051e-4ab6-bd7b-32a5f2646497-metrics-certs\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.953156 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9c2fb5c5-5143-45f7-bcef-e6374fb45624-cert\") pod \"frr-k8s-webhook-server-6998585d5-ngmrb\" (UID: \"9c2fb5c5-5143-45f7-bcef-e6374fb45624\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.954424 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx26w\" (UniqueName: \"kubernetes.io/projected/0d8756c7-051e-4ab6-bd7b-32a5f2646497-kube-api-access-wx26w\") pod \"frr-k8s-b9hkl\" (UID: \"0d8756c7-051e-4ab6-bd7b-32a5f2646497\") " pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.963567 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx4vg\" (UniqueName: 
\"kubernetes.io/projected/e289e852-bef3-4376-8a15-b94339b1a3a3-kube-api-access-mx4vg\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:47:59 crc kubenswrapper[4775]: I1125 19:47:59.967722 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzj5k\" (UniqueName: \"kubernetes.io/projected/9c2fb5c5-5143-45f7-bcef-e6374fb45624-kube-api-access-pzj5k\") pod \"frr-k8s-webhook-server-6998585d5-ngmrb\" (UID: \"9c2fb5c5-5143-45f7-bcef-e6374fb45624\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.033623 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52e5e476-3337-4715-9e67-b7230874d2d4-cert\") pod \"controller-6c7b4b5f48-dddw7\" (UID: \"52e5e476-3337-4715-9e67-b7230874d2d4\") " pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.033727 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl6dm\" (UniqueName: \"kubernetes.io/projected/52e5e476-3337-4715-9e67-b7230874d2d4-kube-api-access-dl6dm\") pod \"controller-6c7b4b5f48-dddw7\" (UID: \"52e5e476-3337-4715-9e67-b7230874d2d4\") " pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.033816 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52e5e476-3337-4715-9e67-b7230874d2d4-metrics-certs\") pod \"controller-6c7b4b5f48-dddw7\" (UID: \"52e5e476-3337-4715-9e67-b7230874d2d4\") " pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.035727 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 
19:48:00.051247 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52e5e476-3337-4715-9e67-b7230874d2d4-metrics-certs\") pod \"controller-6c7b4b5f48-dddw7\" (UID: \"52e5e476-3337-4715-9e67-b7230874d2d4\") " pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.051356 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52e5e476-3337-4715-9e67-b7230874d2d4-cert\") pod \"controller-6c7b4b5f48-dddw7\" (UID: \"52e5e476-3337-4715-9e67-b7230874d2d4\") " pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.057101 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.064951 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.071392 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl6dm\" (UniqueName: \"kubernetes.io/projected/52e5e476-3337-4715-9e67-b7230874d2d4-kube-api-access-dl6dm\") pod \"controller-6c7b4b5f48-dddw7\" (UID: \"52e5e476-3337-4715-9e67-b7230874d2d4\") " pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.146812 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.272289 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerStarted","Data":"99d3d02280bfca7a5246ae70c821aa688b12a580d8fba7cec8cd62cee063e53e"} Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.337775 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-dddw7"] Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.437678 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-memberlist\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:48:00 crc kubenswrapper[4775]: E1125 19:48:00.438064 4775 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 19:48:00 crc kubenswrapper[4775]: E1125 19:48:00.438179 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-memberlist podName:e289e852-bef3-4376-8a15-b94339b1a3a3 nodeName:}" failed. No retries permitted until 2025-11-25 19:48:01.438165817 +0000 UTC m=+863.354528183 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-memberlist") pod "speaker-7p6bq" (UID: "e289e852-bef3-4376-8a15-b94339b1a3a3") : secret "metallb-memberlist" not found Nov 25 19:48:00 crc kubenswrapper[4775]: I1125 19:48:00.482971 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb"] Nov 25 19:48:00 crc kubenswrapper[4775]: W1125 19:48:00.486881 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c2fb5c5_5143_45f7_bcef_e6374fb45624.slice/crio-73a476a3230a4e116e9f10a4da5731ee89df3d712fac7f63ed5841d6810fc2ad WatchSource:0}: Error finding container 73a476a3230a4e116e9f10a4da5731ee89df3d712fac7f63ed5841d6810fc2ad: Status 404 returned error can't find the container with id 73a476a3230a4e116e9f10a4da5731ee89df3d712fac7f63ed5841d6810fc2ad Nov 25 19:48:01 crc kubenswrapper[4775]: I1125 19:48:01.282287 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" event={"ID":"9c2fb5c5-5143-45f7-bcef-e6374fb45624","Type":"ContainerStarted","Data":"73a476a3230a4e116e9f10a4da5731ee89df3d712fac7f63ed5841d6810fc2ad"} Nov 25 19:48:01 crc kubenswrapper[4775]: I1125 19:48:01.285763 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-dddw7" event={"ID":"52e5e476-3337-4715-9e67-b7230874d2d4","Type":"ContainerStarted","Data":"96ead1e2bc2461751cdffb2a1495eceb4cb520bedbcfaca042f178ea6b34c432"} Nov 25 19:48:01 crc kubenswrapper[4775]: I1125 19:48:01.285906 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-dddw7" event={"ID":"52e5e476-3337-4715-9e67-b7230874d2d4","Type":"ContainerStarted","Data":"72428145d83108367795fb23a0798d40bbed78452808e4717349054df0d11b72"} Nov 25 19:48:01 crc kubenswrapper[4775]: I1125 19:48:01.285933 
4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-dddw7" event={"ID":"52e5e476-3337-4715-9e67-b7230874d2d4","Type":"ContainerStarted","Data":"3972b1a3f9bf48160f48780bcf63fd2616dd1d3b6bbea9decf4ec8e3a895ecb2"} Nov 25 19:48:01 crc kubenswrapper[4775]: I1125 19:48:01.286006 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:48:01 crc kubenswrapper[4775]: I1125 19:48:01.448961 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-memberlist\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:48:01 crc kubenswrapper[4775]: I1125 19:48:01.466167 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e289e852-bef3-4376-8a15-b94339b1a3a3-memberlist\") pod \"speaker-7p6bq\" (UID: \"e289e852-bef3-4376-8a15-b94339b1a3a3\") " pod="metallb-system/speaker-7p6bq" Nov 25 19:48:01 crc kubenswrapper[4775]: I1125 19:48:01.629162 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7p6bq" Nov 25 19:48:01 crc kubenswrapper[4775]: W1125 19:48:01.658351 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode289e852_bef3_4376_8a15_b94339b1a3a3.slice/crio-ab19d1584a817a87079c2108af851795f24e8749e18f587ee0a154db82716a2c WatchSource:0}: Error finding container ab19d1584a817a87079c2108af851795f24e8749e18f587ee0a154db82716a2c: Status 404 returned error can't find the container with id ab19d1584a817a87079c2108af851795f24e8749e18f587ee0a154db82716a2c Nov 25 19:48:02 crc kubenswrapper[4775]: I1125 19:48:02.294189 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7p6bq" event={"ID":"e289e852-bef3-4376-8a15-b94339b1a3a3","Type":"ContainerStarted","Data":"1c714accc12610323ccd0affa532e998bb9181c7fbdbbeab9c10ad07156dcc45"} Nov 25 19:48:02 crc kubenswrapper[4775]: I1125 19:48:02.294233 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7p6bq" event={"ID":"e289e852-bef3-4376-8a15-b94339b1a3a3","Type":"ContainerStarted","Data":"a13d1df89d3c3f51392d4d2700b36b2e261cf7ad4081840ab4846f93b8d5b9e5"} Nov 25 19:48:02 crc kubenswrapper[4775]: I1125 19:48:02.294246 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7p6bq" event={"ID":"e289e852-bef3-4376-8a15-b94339b1a3a3","Type":"ContainerStarted","Data":"ab19d1584a817a87079c2108af851795f24e8749e18f587ee0a154db82716a2c"} Nov 25 19:48:02 crc kubenswrapper[4775]: I1125 19:48:02.294753 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7p6bq" Nov 25 19:48:02 crc kubenswrapper[4775]: I1125 19:48:02.327624 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-dddw7" podStartSLOduration=3.327605863 podStartE2EDuration="3.327605863s" podCreationTimestamp="2025-11-25 19:47:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:48:01.31348807 +0000 UTC m=+863.229850426" watchObservedRunningTime="2025-11-25 19:48:02.327605863 +0000 UTC m=+864.243968249" Nov 25 19:48:08 crc kubenswrapper[4775]: I1125 19:48:08.356104 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" event={"ID":"9c2fb5c5-5143-45f7-bcef-e6374fb45624","Type":"ContainerStarted","Data":"3fddcee2f89d8be662c97172adb0c9994c80373a76bb0e654c6f9f70a69c634d"} Nov 25 19:48:08 crc kubenswrapper[4775]: I1125 19:48:08.356693 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 25 19:48:08 crc kubenswrapper[4775]: I1125 19:48:08.359759 4775 generic.go:334] "Generic (PLEG): container finished" podID="0d8756c7-051e-4ab6-bd7b-32a5f2646497" containerID="bf8581910eed53f1798d5058bbaa1f4fd53c810c938e80c23b931f6c5572d9e0" exitCode=0 Nov 25 19:48:08 crc kubenswrapper[4775]: I1125 19:48:08.359857 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerDied","Data":"bf8581910eed53f1798d5058bbaa1f4fd53c810c938e80c23b931f6c5572d9e0"} Nov 25 19:48:08 crc kubenswrapper[4775]: I1125 19:48:08.384349 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" podStartSLOduration=2.552207669 podStartE2EDuration="9.384322751s" podCreationTimestamp="2025-11-25 19:47:59 +0000 UTC" firstStartedPulling="2025-11-25 19:48:00.493006548 +0000 UTC m=+862.409368924" lastFinishedPulling="2025-11-25 19:48:07.3251216 +0000 UTC m=+869.241484006" observedRunningTime="2025-11-25 19:48:08.378940096 +0000 UTC m=+870.295302492" watchObservedRunningTime="2025-11-25 19:48:08.384322751 +0000 UTC m=+870.300685157" Nov 25 19:48:08 
crc kubenswrapper[4775]: I1125 19:48:08.385751 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7p6bq" podStartSLOduration=9.385715819 podStartE2EDuration="9.385715819s" podCreationTimestamp="2025-11-25 19:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:48:02.326263877 +0000 UTC m=+864.242626243" watchObservedRunningTime="2025-11-25 19:48:08.385715819 +0000 UTC m=+870.302078215" Nov 25 19:48:09 crc kubenswrapper[4775]: I1125 19:48:09.370882 4775 generic.go:334] "Generic (PLEG): container finished" podID="0d8756c7-051e-4ab6-bd7b-32a5f2646497" containerID="f237eae570e4c08b0e4e75db6c17009edc59ad2a780039c5f756d508aba775ca" exitCode=0 Nov 25 19:48:09 crc kubenswrapper[4775]: I1125 19:48:09.370955 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerDied","Data":"f237eae570e4c08b0e4e75db6c17009edc59ad2a780039c5f756d508aba775ca"} Nov 25 19:48:10 crc kubenswrapper[4775]: I1125 19:48:10.152874 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-dddw7" Nov 25 19:48:10 crc kubenswrapper[4775]: I1125 19:48:10.379295 4775 generic.go:334] "Generic (PLEG): container finished" podID="0d8756c7-051e-4ab6-bd7b-32a5f2646497" containerID="a4d34b415a14f2444cb0ac7ea6f4ac78d2c9551c8ef191e2111388370b9cbe71" exitCode=0 Nov 25 19:48:10 crc kubenswrapper[4775]: I1125 19:48:10.379352 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerDied","Data":"a4d34b415a14f2444cb0ac7ea6f4ac78d2c9551c8ef191e2111388370b9cbe71"} Nov 25 19:48:11 crc kubenswrapper[4775]: I1125 19:48:11.396409 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" 
event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerStarted","Data":"1e9f4179e75555a5fff042a9b1456e487bf90a0f01cd02f1e8f2fc4a86bfe9ce"} Nov 25 19:48:11 crc kubenswrapper[4775]: I1125 19:48:11.396690 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerStarted","Data":"f1d7b623f4d6dd7650cf949ece8d40058f825d5fe41491116fda24283e3a2865"} Nov 25 19:48:11 crc kubenswrapper[4775]: I1125 19:48:11.396707 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerStarted","Data":"b8680e0847685033e9b2ccf7f03fbdd8c2eefa04d53da083ad242fd024b07921"} Nov 25 19:48:11 crc kubenswrapper[4775]: I1125 19:48:11.396718 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerStarted","Data":"be340cb5328bf95099fd639f3cea19b866f6a84ece2fe692a14603dfcdb0ee00"} Nov 25 19:48:11 crc kubenswrapper[4775]: I1125 19:48:11.396729 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerStarted","Data":"29a48726436d84548af479bfd3fbd14190c62d1686558fa8d679139e9a424e56"} Nov 25 19:48:11 crc kubenswrapper[4775]: I1125 19:48:11.632575 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7p6bq" Nov 25 19:48:12 crc kubenswrapper[4775]: I1125 19:48:12.411516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b9hkl" event={"ID":"0d8756c7-051e-4ab6-bd7b-32a5f2646497","Type":"ContainerStarted","Data":"51c118748938b4f106bc49eba4b789171717e8db794349b56a14bcd7b9392b55"} Nov 25 19:48:12 crc kubenswrapper[4775]: I1125 19:48:12.411963 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:48:12 crc kubenswrapper[4775]: I1125 19:48:12.447269 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-b9hkl" podStartSLOduration=6.326325577 podStartE2EDuration="13.447244052s" podCreationTimestamp="2025-11-25 19:47:59 +0000 UTC" firstStartedPulling="2025-11-25 19:48:00.169429375 +0000 UTC m=+862.085791741" lastFinishedPulling="2025-11-25 19:48:07.29034781 +0000 UTC m=+869.206710216" observedRunningTime="2025-11-25 19:48:12.441926868 +0000 UTC m=+874.358289264" watchObservedRunningTime="2025-11-25 19:48:12.447244052 +0000 UTC m=+874.363606428" Nov 25 19:48:15 crc kubenswrapper[4775]: I1125 19:48:15.066123 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:48:15 crc kubenswrapper[4775]: I1125 19:48:15.115362 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.445466 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sbx5l"] Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.446371 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sbx5l" Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.449174 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-clpjb" Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.449713 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.450622 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.455326 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sbx5l"] Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.592603 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qhv\" (UniqueName: \"kubernetes.io/projected/98ddeb43-7cad-4125-9945-9d152c7df25b-kube-api-access-b6qhv\") pod \"openstack-operator-index-sbx5l\" (UID: \"98ddeb43-7cad-4125-9945-9d152c7df25b\") " pod="openstack-operators/openstack-operator-index-sbx5l" Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.694077 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qhv\" (UniqueName: \"kubernetes.io/projected/98ddeb43-7cad-4125-9945-9d152c7df25b-kube-api-access-b6qhv\") pod \"openstack-operator-index-sbx5l\" (UID: \"98ddeb43-7cad-4125-9945-9d152c7df25b\") " pod="openstack-operators/openstack-operator-index-sbx5l" Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.723428 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qhv\" (UniqueName: \"kubernetes.io/projected/98ddeb43-7cad-4125-9945-9d152c7df25b-kube-api-access-b6qhv\") pod \"openstack-operator-index-sbx5l\" (UID: 
\"98ddeb43-7cad-4125-9945-9d152c7df25b\") " pod="openstack-operators/openstack-operator-index-sbx5l" Nov 25 19:48:18 crc kubenswrapper[4775]: I1125 19:48:18.765392 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sbx5l" Nov 25 19:48:19 crc kubenswrapper[4775]: I1125 19:48:19.187408 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sbx5l"] Nov 25 19:48:19 crc kubenswrapper[4775]: I1125 19:48:19.466179 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sbx5l" event={"ID":"98ddeb43-7cad-4125-9945-9d152c7df25b","Type":"ContainerStarted","Data":"221c7f170326d83694afeff85f2d8bcda5bb8de97a7410bf82e71f39236eeff5"} Nov 25 19:48:20 crc kubenswrapper[4775]: I1125 19:48:20.061243 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-ngmrb" Nov 25 19:48:20 crc kubenswrapper[4775]: I1125 19:48:20.069951 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-b9hkl" Nov 25 19:48:22 crc kubenswrapper[4775]: I1125 19:48:22.489919 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sbx5l" event={"ID":"98ddeb43-7cad-4125-9945-9d152c7df25b","Type":"ContainerStarted","Data":"6409443bdafc078c75e9d8418a79285e12aaa3f1e93a5da511ac6d5ebee8836e"} Nov 25 19:48:22 crc kubenswrapper[4775]: I1125 19:48:22.512868 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sbx5l" podStartSLOduration=2.227090533 podStartE2EDuration="4.512842839s" podCreationTimestamp="2025-11-25 19:48:18 +0000 UTC" firstStartedPulling="2025-11-25 19:48:19.208435426 +0000 UTC m=+881.124797832" lastFinishedPulling="2025-11-25 19:48:21.494187772 +0000 UTC m=+883.410550138" observedRunningTime="2025-11-25 
19:48:22.51214992 +0000 UTC m=+884.428512356" watchObservedRunningTime="2025-11-25 19:48:22.512842839 +0000 UTC m=+884.429205245" Nov 25 19:48:28 crc kubenswrapper[4775]: I1125 19:48:28.765723 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-sbx5l" Nov 25 19:48:28 crc kubenswrapper[4775]: I1125 19:48:28.766415 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sbx5l" Nov 25 19:48:28 crc kubenswrapper[4775]: I1125 19:48:28.812899 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-sbx5l" Nov 25 19:48:29 crc kubenswrapper[4775]: I1125 19:48:29.574423 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sbx5l" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.684279 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh"] Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.686571 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.689914 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-42fcc" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.701759 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh"] Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.837574 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w99s\" (UniqueName: \"kubernetes.io/projected/b2f991e3-e547-4228-89ed-7229d3bf188a-kube-api-access-4w99s\") pod \"ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.837999 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-util\") pod \"ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.838350 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-bundle\") pod \"ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 
19:48:31.940293 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-bundle\") pod \"ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.940461 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w99s\" (UniqueName: \"kubernetes.io/projected/b2f991e3-e547-4228-89ed-7229d3bf188a-kube-api-access-4w99s\") pod \"ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.940622 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-util\") pod \"ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.941132 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-bundle\") pod \"ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.941190 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-util\") pod \"ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:31 crc kubenswrapper[4775]: I1125 19:48:31.972586 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w99s\" (UniqueName: \"kubernetes.io/projected/b2f991e3-e547-4228-89ed-7229d3bf188a-kube-api-access-4w99s\") pod \"ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:32 crc kubenswrapper[4775]: I1125 19:48:32.023341 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:32 crc kubenswrapper[4775]: I1125 19:48:32.520977 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh"] Nov 25 19:48:32 crc kubenswrapper[4775]: W1125 19:48:32.526792 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2f991e3_e547_4228_89ed_7229d3bf188a.slice/crio-85a376c5da8b4022d04e978068d3b4dc55c1b47a34d9ce2d831e6c3785d58b80 WatchSource:0}: Error finding container 85a376c5da8b4022d04e978068d3b4dc55c1b47a34d9ce2d831e6c3785d58b80: Status 404 returned error can't find the container with id 85a376c5da8b4022d04e978068d3b4dc55c1b47a34d9ce2d831e6c3785d58b80 Nov 25 19:48:32 crc kubenswrapper[4775]: I1125 19:48:32.567579 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" 
event={"ID":"b2f991e3-e547-4228-89ed-7229d3bf188a","Type":"ContainerStarted","Data":"85a376c5da8b4022d04e978068d3b4dc55c1b47a34d9ce2d831e6c3785d58b80"} Nov 25 19:48:33 crc kubenswrapper[4775]: I1125 19:48:33.581610 4775 generic.go:334] "Generic (PLEG): container finished" podID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerID="621b11a0bbb5b1652c50994b11bcaabc8a439fddae7793e0a31405d6986e5e47" exitCode=0 Nov 25 19:48:33 crc kubenswrapper[4775]: I1125 19:48:33.581723 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" event={"ID":"b2f991e3-e547-4228-89ed-7229d3bf188a","Type":"ContainerDied","Data":"621b11a0bbb5b1652c50994b11bcaabc8a439fddae7793e0a31405d6986e5e47"} Nov 25 19:48:34 crc kubenswrapper[4775]: I1125 19:48:34.596835 4775 generic.go:334] "Generic (PLEG): container finished" podID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerID="ef84dde8a778e85794e8a07cf6cbdd0e9e564e17c1f4b9e17a82454025e6348b" exitCode=0 Nov 25 19:48:34 crc kubenswrapper[4775]: I1125 19:48:34.596908 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" event={"ID":"b2f991e3-e547-4228-89ed-7229d3bf188a","Type":"ContainerDied","Data":"ef84dde8a778e85794e8a07cf6cbdd0e9e564e17c1f4b9e17a82454025e6348b"} Nov 25 19:48:35 crc kubenswrapper[4775]: I1125 19:48:35.606777 4775 generic.go:334] "Generic (PLEG): container finished" podID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerID="8bc79c9547bc1fccacc09e53e6425481de0f655ab3f5b730385736b7417e35d8" exitCode=0 Nov 25 19:48:35 crc kubenswrapper[4775]: I1125 19:48:35.606884 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" event={"ID":"b2f991e3-e547-4228-89ed-7229d3bf188a","Type":"ContainerDied","Data":"8bc79c9547bc1fccacc09e53e6425481de0f655ab3f5b730385736b7417e35d8"} Nov 25 
19:48:36 crc kubenswrapper[4775]: I1125 19:48:36.963050 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.114286 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-bundle\") pod \"b2f991e3-e547-4228-89ed-7229d3bf188a\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.114434 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-util\") pod \"b2f991e3-e547-4228-89ed-7229d3bf188a\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.114544 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w99s\" (UniqueName: \"kubernetes.io/projected/b2f991e3-e547-4228-89ed-7229d3bf188a-kube-api-access-4w99s\") pod \"b2f991e3-e547-4228-89ed-7229d3bf188a\" (UID: \"b2f991e3-e547-4228-89ed-7229d3bf188a\") " Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.115787 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-bundle" (OuterVolumeSpecName: "bundle") pod "b2f991e3-e547-4228-89ed-7229d3bf188a" (UID: "b2f991e3-e547-4228-89ed-7229d3bf188a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.125872 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f991e3-e547-4228-89ed-7229d3bf188a-kube-api-access-4w99s" (OuterVolumeSpecName: "kube-api-access-4w99s") pod "b2f991e3-e547-4228-89ed-7229d3bf188a" (UID: "b2f991e3-e547-4228-89ed-7229d3bf188a"). InnerVolumeSpecName "kube-api-access-4w99s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.129608 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-util" (OuterVolumeSpecName: "util") pod "b2f991e3-e547-4228-89ed-7229d3bf188a" (UID: "b2f991e3-e547-4228-89ed-7229d3bf188a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.216812 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w99s\" (UniqueName: \"kubernetes.io/projected/b2f991e3-e547-4228-89ed-7229d3bf188a-kube-api-access-4w99s\") on node \"crc\" DevicePath \"\"" Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.216860 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.216872 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2f991e3-e547-4228-89ed-7229d3bf188a-util\") on node \"crc\" DevicePath \"\"" Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.625383 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" 
event={"ID":"b2f991e3-e547-4228-89ed-7229d3bf188a","Type":"ContainerDied","Data":"85a376c5da8b4022d04e978068d3b4dc55c1b47a34d9ce2d831e6c3785d58b80"} Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.625418 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85a376c5da8b4022d04e978068d3b4dc55c1b47a34d9ce2d831e6c3785d58b80" Nov 25 19:48:37 crc kubenswrapper[4775]: I1125 19:48:37.625454 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh" Nov 25 19:48:41 crc kubenswrapper[4775]: I1125 19:48:41.069817 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:48:41 crc kubenswrapper[4775]: I1125 19:48:41.070377 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.182666 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf"] Nov 25 19:48:45 crc kubenswrapper[4775]: E1125 19:48:45.183972 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerName="util" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.184050 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerName="util" Nov 25 19:48:45 crc kubenswrapper[4775]: E1125 19:48:45.184116 4775 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerName="pull" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.184171 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerName="pull" Nov 25 19:48:45 crc kubenswrapper[4775]: E1125 19:48:45.184236 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerName="extract" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.184289 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerName="extract" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.184447 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f991e3-e547-4228-89ed-7229d3bf188a" containerName="extract" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.184895 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.187957 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-kcn65" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.234368 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf"] Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.341531 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4fsb\" (UniqueName: \"kubernetes.io/projected/265ba024-3e37-4150-a70e-80cd60462c3c-kube-api-access-x4fsb\") pod \"openstack-operator-controller-operator-cd684d8f4-wqnhf\" (UID: \"265ba024-3e37-4150-a70e-80cd60462c3c\") " pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" Nov 
25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.442633 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4fsb\" (UniqueName: \"kubernetes.io/projected/265ba024-3e37-4150-a70e-80cd60462c3c-kube-api-access-x4fsb\") pod \"openstack-operator-controller-operator-cd684d8f4-wqnhf\" (UID: \"265ba024-3e37-4150-a70e-80cd60462c3c\") " pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.473440 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4fsb\" (UniqueName: \"kubernetes.io/projected/265ba024-3e37-4150-a70e-80cd60462c3c-kube-api-access-x4fsb\") pod \"openstack-operator-controller-operator-cd684d8f4-wqnhf\" (UID: \"265ba024-3e37-4150-a70e-80cd60462c3c\") " pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.500825 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" Nov 25 19:48:45 crc kubenswrapper[4775]: I1125 19:48:45.720641 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf"] Nov 25 19:48:46 crc kubenswrapper[4775]: I1125 19:48:46.694088 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" event={"ID":"265ba024-3e37-4150-a70e-80cd60462c3c","Type":"ContainerStarted","Data":"02a7700f13c289fa3b275930da16bacabd086325c10b97515a5e7d11ea2b5ae0"} Nov 25 19:48:49 crc kubenswrapper[4775]: I1125 19:48:49.722459 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" event={"ID":"265ba024-3e37-4150-a70e-80cd60462c3c","Type":"ContainerStarted","Data":"5f8c2e3296cc99d9dc1768cb924ad51d81ddd4a04197068890df5b194cbee28d"} Nov 25 19:48:49 crc kubenswrapper[4775]: I1125 19:48:49.723393 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" Nov 25 19:48:49 crc kubenswrapper[4775]: I1125 19:48:49.765570 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" podStartSLOduration=1.018865888 podStartE2EDuration="4.765548273s" podCreationTimestamp="2025-11-25 19:48:45 +0000 UTC" firstStartedPulling="2025-11-25 19:48:45.727794513 +0000 UTC m=+907.644156879" lastFinishedPulling="2025-11-25 19:48:49.474476898 +0000 UTC m=+911.390839264" observedRunningTime="2025-11-25 19:48:49.761057992 +0000 UTC m=+911.677420408" watchObservedRunningTime="2025-11-25 19:48:49.765548273 +0000 UTC m=+911.681910679" Nov 25 19:48:55 crc kubenswrapper[4775]: I1125 19:48:55.505711 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-cd684d8f4-wqnhf" Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.732280 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b5cwf"] Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.734084 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.749164 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5cwf"] Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.861149 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-catalog-content\") pod \"community-operators-b5cwf\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.861216 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92ckf\" (UniqueName: \"kubernetes.io/projected/83480cff-9fa1-4812-90a4-0bedd4ba5637-kube-api-access-92ckf\") pod \"community-operators-b5cwf\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.861241 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-utilities\") pod \"community-operators-b5cwf\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.962232 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-utilities\") pod \"community-operators-b5cwf\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.962366 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-catalog-content\") pod \"community-operators-b5cwf\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.962466 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92ckf\" (UniqueName: \"kubernetes.io/projected/83480cff-9fa1-4812-90a4-0bedd4ba5637-kube-api-access-92ckf\") pod \"community-operators-b5cwf\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.962842 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-catalog-content\") pod \"community-operators-b5cwf\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:06 crc kubenswrapper[4775]: I1125 19:49:06.962945 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-utilities\") pod \"community-operators-b5cwf\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:07 crc kubenswrapper[4775]: I1125 19:49:07.007788 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-92ckf\" (UniqueName: \"kubernetes.io/projected/83480cff-9fa1-4812-90a4-0bedd4ba5637-kube-api-access-92ckf\") pod \"community-operators-b5cwf\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:07 crc kubenswrapper[4775]: I1125 19:49:07.061475 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:07 crc kubenswrapper[4775]: I1125 19:49:07.300091 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5cwf"] Nov 25 19:49:07 crc kubenswrapper[4775]: I1125 19:49:07.853867 4775 generic.go:334] "Generic (PLEG): container finished" podID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerID="2e738db3da0286215a0f49a87e6926035418e53ea885d99dbfc68e2ee63034c1" exitCode=0 Nov 25 19:49:07 crc kubenswrapper[4775]: I1125 19:49:07.853906 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5cwf" event={"ID":"83480cff-9fa1-4812-90a4-0bedd4ba5637","Type":"ContainerDied","Data":"2e738db3da0286215a0f49a87e6926035418e53ea885d99dbfc68e2ee63034c1"} Nov 25 19:49:07 crc kubenswrapper[4775]: I1125 19:49:07.854218 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5cwf" event={"ID":"83480cff-9fa1-4812-90a4-0bedd4ba5637","Type":"ContainerStarted","Data":"ea844ed3f29cf0f7ffcbcb082f2aaa3bd54a78f88f27b8e7bf5b0010449b26e1"} Nov 25 19:49:08 crc kubenswrapper[4775]: I1125 19:49:08.861402 4775 generic.go:334] "Generic (PLEG): container finished" podID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerID="3f53808e0badf80e1872a054cebe59d104fe063a191f3d3b8ceb860847c9ab66" exitCode=0 Nov 25 19:49:08 crc kubenswrapper[4775]: I1125 19:49:08.861480 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5cwf" 
event={"ID":"83480cff-9fa1-4812-90a4-0bedd4ba5637","Type":"ContainerDied","Data":"3f53808e0badf80e1872a054cebe59d104fe063a191f3d3b8ceb860847c9ab66"} Nov 25 19:49:09 crc kubenswrapper[4775]: I1125 19:49:09.869052 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5cwf" event={"ID":"83480cff-9fa1-4812-90a4-0bedd4ba5637","Type":"ContainerStarted","Data":"63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50"} Nov 25 19:49:09 crc kubenswrapper[4775]: I1125 19:49:09.893230 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b5cwf" podStartSLOduration=2.24301407 podStartE2EDuration="3.893213702s" podCreationTimestamp="2025-11-25 19:49:06 +0000 UTC" firstStartedPulling="2025-11-25 19:49:07.855685523 +0000 UTC m=+929.772047889" lastFinishedPulling="2025-11-25 19:49:09.505885155 +0000 UTC m=+931.422247521" observedRunningTime="2025-11-25 19:49:09.891283819 +0000 UTC m=+931.807646185" watchObservedRunningTime="2025-11-25 19:49:09.893213702 +0000 UTC m=+931.809576068" Nov 25 19:49:11 crc kubenswrapper[4775]: I1125 19:49:11.070172 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:49:11 crc kubenswrapper[4775]: I1125 19:49:11.070469 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.461278 4775 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.462131 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.464201 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-7qwdr" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.482854 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.484028 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.488193 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dn22j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.491725 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.505989 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.522763 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-vn52j"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.523964 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.527693 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.527721 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-khmtx" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.528680 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.533051 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-n28qj" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.533978 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-vn52j"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.538106 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.559157 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.560670 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.562454 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-75qpp" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.576131 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.584673 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.585706 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.588212 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fbrhd" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.600467 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.601696 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.611013 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.611013 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zdmx2" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.617173 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.618167 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.623841 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-srjcq" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.631175 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.653548 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.654474 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t46b9\" (UniqueName: \"kubernetes.io/projected/ca39bca1-68fa-4d64-a929-1b3d013bb679-kube-api-access-t46b9\") pod \"glance-operator-controller-manager-84dfd86bd6-8nk5f\" (UID: \"ca39bca1-68fa-4d64-a929-1b3d013bb679\") " pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" Nov 25 19:49:13 
crc kubenswrapper[4775]: I1125 19:49:13.654524 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8888\" (UniqueName: \"kubernetes.io/projected/a778d0b3-0440-4c61-8a61-59524e36835e-kube-api-access-k8888\") pod \"barbican-operator-controller-manager-7b64f4fb85-w8459\" (UID: \"a778d0b3-0440-4c61-8a61-59524e36835e\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.654551 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sggn\" (UniqueName: \"kubernetes.io/projected/fb22768d-951e-4a69-bba6-8728e80e2935-kube-api-access-4sggn\") pod \"cinder-operator-controller-manager-6b7f75547b-mh8t8\" (UID: \"fb22768d-951e-4a69-bba6-8728e80e2935\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.654596 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v49zr\" (UniqueName: \"kubernetes.io/projected/f1338e2e-e4e6-4c4b-a410-72e2d1acab0d-kube-api-access-v49zr\") pod \"designate-operator-controller-manager-955677c94-vn52j\" (UID: \"f1338e2e-e4e6-4c4b-a410-72e2d1acab0d\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.654632 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gqqv\" (UniqueName: \"kubernetes.io/projected/360afa93-07ee-47ad-beb7-cd45b9cc9bef-kube-api-access-5gqqv\") pod \"heat-operator-controller-manager-5b77f656f-knzcj\" (UID: \"360afa93-07ee-47ad-beb7-cd45b9cc9bef\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.681914 4775 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.687231 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.697955 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.701681 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-k7mzk" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.705543 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.738520 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.739979 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.742023 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dvvp5" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.755701 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.757003 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.760299 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rrfq9" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.765501 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.766812 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.768519 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-djt9q" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.772504 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbft\" (UniqueName: \"kubernetes.io/projected/d8f444e1-3e73-4daa-a5f0-4fe2236a691b-kube-api-access-nvbft\") pod \"ironic-operator-controller-manager-67cb4dc6d4-86lv6\" (UID: \"d8f444e1-3e73-4daa-a5f0-4fe2236a691b\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.772674 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.772721 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k8888\" (UniqueName: \"kubernetes.io/projected/a778d0b3-0440-4c61-8a61-59524e36835e-kube-api-access-k8888\") pod \"barbican-operator-controller-manager-7b64f4fb85-w8459\" (UID: \"a778d0b3-0440-4c61-8a61-59524e36835e\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.772797 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sggn\" (UniqueName: \"kubernetes.io/projected/fb22768d-951e-4a69-bba6-8728e80e2935-kube-api-access-4sggn\") pod \"cinder-operator-controller-manager-6b7f75547b-mh8t8\" (UID: \"fb22768d-951e-4a69-bba6-8728e80e2935\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.772894 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sj4s\" (UniqueName: \"kubernetes.io/projected/6910455f-354f-4f91-8333-5cb54be87db6-kube-api-access-6sj4s\") pod \"keystone-operator-controller-manager-7b4567c7cf-hjdwf\" (UID: \"6910455f-354f-4f91-8333-5cb54be87db6\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.772956 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dlc5\" (UniqueName: \"kubernetes.io/projected/9a436d5c-4f54-479c-846f-11e5d66d91fa-kube-api-access-8dlc5\") pod \"neutron-operator-controller-manager-6fdcddb789-5nm9r\" (UID: \"9a436d5c-4f54-479c-846f-11e5d66d91fa\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.772984 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v49zr\" (UniqueName: 
\"kubernetes.io/projected/f1338e2e-e4e6-4c4b-a410-72e2d1acab0d-kube-api-access-v49zr\") pod \"designate-operator-controller-manager-955677c94-vn52j\" (UID: \"f1338e2e-e4e6-4c4b-a410-72e2d1acab0d\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.773011 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28fk\" (UniqueName: \"kubernetes.io/projected/256bc456-e90c-4c18-8531-9d0470473b55-kube-api-access-s28fk\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.773085 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2k9k\" (UniqueName: \"kubernetes.io/projected/88abb3bd-eb47-4185-a1a9-4f300ed99167-kube-api-access-h2k9k\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-sj9pg\" (UID: \"88abb3bd-eb47-4185-a1a9-4f300ed99167\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.773154 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fhk2\" (UniqueName: \"kubernetes.io/projected/e7a4f97f-5b6f-4347-b156-d96e1be21183-kube-api-access-5fhk2\") pod \"manila-operator-controller-manager-5d499bf58b-sd2lc\" (UID: \"e7a4f97f-5b6f-4347-b156-d96e1be21183\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.773203 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gqqv\" (UniqueName: \"kubernetes.io/projected/360afa93-07ee-47ad-beb7-cd45b9cc9bef-kube-api-access-5gqqv\") pod 
\"heat-operator-controller-manager-5b77f656f-knzcj\" (UID: \"360afa93-07ee-47ad-beb7-cd45b9cc9bef\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.773234 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zccfw\" (UniqueName: \"kubernetes.io/projected/08376459-180b-411f-9c74-c918980541f6-kube-api-access-zccfw\") pod \"horizon-operator-controller-manager-5d494799bf-892tw\" (UID: \"08376459-180b-411f-9c74-c918980541f6\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.773258 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t46b9\" (UniqueName: \"kubernetes.io/projected/ca39bca1-68fa-4d64-a929-1b3d013bb679-kube-api-access-t46b9\") pod \"glance-operator-controller-manager-84dfd86bd6-8nk5f\" (UID: \"ca39bca1-68fa-4d64-a929-1b3d013bb679\") " pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.780505 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.781570 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.792559 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sf6jv" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.796517 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.803759 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8888\" (UniqueName: \"kubernetes.io/projected/a778d0b3-0440-4c61-8a61-59524e36835e-kube-api-access-k8888\") pod \"barbican-operator-controller-manager-7b64f4fb85-w8459\" (UID: \"a778d0b3-0440-4c61-8a61-59524e36835e\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.803817 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.805518 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sggn\" (UniqueName: \"kubernetes.io/projected/fb22768d-951e-4a69-bba6-8728e80e2935-kube-api-access-4sggn\") pod \"cinder-operator-controller-manager-6b7f75547b-mh8t8\" (UID: \"fb22768d-951e-4a69-bba6-8728e80e2935\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.812369 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v49zr\" (UniqueName: \"kubernetes.io/projected/f1338e2e-e4e6-4c4b-a410-72e2d1acab0d-kube-api-access-v49zr\") pod \"designate-operator-controller-manager-955677c94-vn52j\" (UID: \"f1338e2e-e4e6-4c4b-a410-72e2d1acab0d\") " 
pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.823038 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t46b9\" (UniqueName: \"kubernetes.io/projected/ca39bca1-68fa-4d64-a929-1b3d013bb679-kube-api-access-t46b9\") pod \"glance-operator-controller-manager-84dfd86bd6-8nk5f\" (UID: \"ca39bca1-68fa-4d64-a929-1b3d013bb679\") " pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.827770 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.833146 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gqqv\" (UniqueName: \"kubernetes.io/projected/360afa93-07ee-47ad-beb7-cd45b9cc9bef-kube-api-access-5gqqv\") pod \"heat-operator-controller-manager-5b77f656f-knzcj\" (UID: \"360afa93-07ee-47ad-beb7-cd45b9cc9bef\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.837748 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.840745 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.841825 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.844867 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-n8pjl" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.850167 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.853336 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874160 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874219 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlrvq\" (UniqueName: \"kubernetes.io/projected/74ce2e86-cedf-4014-8d4c-8c126d58e7c9-kube-api-access-xlrvq\") pod \"nova-operator-controller-manager-79556f57fc-rlx29\" (UID: \"74ce2e86-cedf-4014-8d4c-8c126d58e7c9\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874253 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sj4s\" (UniqueName: \"kubernetes.io/projected/6910455f-354f-4f91-8333-5cb54be87db6-kube-api-access-6sj4s\") pod \"keystone-operator-controller-manager-7b4567c7cf-hjdwf\" (UID: \"6910455f-354f-4f91-8333-5cb54be87db6\") " 
pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874291 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dlc5\" (UniqueName: \"kubernetes.io/projected/9a436d5c-4f54-479c-846f-11e5d66d91fa-kube-api-access-8dlc5\") pod \"neutron-operator-controller-manager-6fdcddb789-5nm9r\" (UID: \"9a436d5c-4f54-479c-846f-11e5d66d91fa\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874327 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s28fk\" (UniqueName: \"kubernetes.io/projected/256bc456-e90c-4c18-8531-9d0470473b55-kube-api-access-s28fk\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874345 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2k9k\" (UniqueName: \"kubernetes.io/projected/88abb3bd-eb47-4185-a1a9-4f300ed99167-kube-api-access-h2k9k\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-sj9pg\" (UID: \"88abb3bd-eb47-4185-a1a9-4f300ed99167\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874361 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fhk2\" (UniqueName: \"kubernetes.io/projected/e7a4f97f-5b6f-4347-b156-d96e1be21183-kube-api-access-5fhk2\") pod \"manila-operator-controller-manager-5d499bf58b-sd2lc\" (UID: \"e7a4f97f-5b6f-4347-b156-d96e1be21183\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874389 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88w7n\" (UniqueName: \"kubernetes.io/projected/99a43674-e3dd-46c8-8fe7-b527112b3ff1-kube-api-access-88w7n\") pod \"octavia-operator-controller-manager-64cdc6ff96-qdknn\" (UID: \"99a43674-e3dd-46c8-8fe7-b527112b3ff1\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874413 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zccfw\" (UniqueName: \"kubernetes.io/projected/08376459-180b-411f-9c74-c918980541f6-kube-api-access-zccfw\") pod \"horizon-operator-controller-manager-5d494799bf-892tw\" (UID: \"08376459-180b-411f-9c74-c918980541f6\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.874437 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvbft\" (UniqueName: \"kubernetes.io/projected/d8f444e1-3e73-4daa-a5f0-4fe2236a691b-kube-api-access-nvbft\") pod \"ironic-operator-controller-manager-67cb4dc6d4-86lv6\" (UID: \"d8f444e1-3e73-4daa-a5f0-4fe2236a691b\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" Nov 25 19:49:13 crc kubenswrapper[4775]: E1125 19:49:13.874767 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:13 crc kubenswrapper[4775]: E1125 19:49:13.874817 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert podName:256bc456-e90c-4c18-8531-9d0470473b55 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:14.374801334 +0000 UTC m=+936.291163700 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert") pod "infra-operator-controller-manager-57548d458d-tcv4j" (UID: "256bc456-e90c-4c18-8531-9d0470473b55") : secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.881992 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.899728 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.901625 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2k9k\" (UniqueName: \"kubernetes.io/projected/88abb3bd-eb47-4185-a1a9-4f300ed99167-kube-api-access-h2k9k\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-sj9pg\" (UID: \"88abb3bd-eb47-4185-a1a9-4f300ed99167\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.905494 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbft\" (UniqueName: \"kubernetes.io/projected/d8f444e1-3e73-4daa-a5f0-4fe2236a691b-kube-api-access-nvbft\") pod \"ironic-operator-controller-manager-67cb4dc6d4-86lv6\" (UID: \"d8f444e1-3e73-4daa-a5f0-4fe2236a691b\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.906441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s28fk\" (UniqueName: \"kubernetes.io/projected/256bc456-e90c-4c18-8531-9d0470473b55-kube-api-access-s28fk\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.906541 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fhk2\" (UniqueName: \"kubernetes.io/projected/e7a4f97f-5b6f-4347-b156-d96e1be21183-kube-api-access-5fhk2\") pod \"manila-operator-controller-manager-5d499bf58b-sd2lc\" (UID: \"e7a4f97f-5b6f-4347-b156-d96e1be21183\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.906630 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.907617 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sj4s\" (UniqueName: \"kubernetes.io/projected/6910455f-354f-4f91-8333-5cb54be87db6-kube-api-access-6sj4s\") pod \"keystone-operator-controller-manager-7b4567c7cf-hjdwf\" (UID: \"6910455f-354f-4f91-8333-5cb54be87db6\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.907859 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.909533 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dlc5\" (UniqueName: \"kubernetes.io/projected/9a436d5c-4f54-479c-846f-11e5d66d91fa-kube-api-access-8dlc5\") pod \"neutron-operator-controller-manager-6fdcddb789-5nm9r\" (UID: \"9a436d5c-4f54-479c-846f-11e5d66d91fa\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.909968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zccfw\" (UniqueName: \"kubernetes.io/projected/08376459-180b-411f-9c74-c918980541f6-kube-api-access-zccfw\") pod \"horizon-operator-controller-manager-5d494799bf-892tw\" (UID: \"08376459-180b-411f-9c74-c918980541f6\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.911772 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-55q6j" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.911949 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.914833 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.928300 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.929444 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.932389 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-w649g" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.940919 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.943824 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.944265 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.953281 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.959220 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lbhdh" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.967712 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.968938 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.973056 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9546w" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.974882 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdkp\" (UniqueName: \"kubernetes.io/projected/d9838469-3633-4b7d-88dc-0a6fd8c272ce-kube-api-access-jfdkp\") pod \"ovn-operator-controller-manager-56897c768d-p6c9k\" (UID: \"d9838469-3633-4b7d-88dc-0a6fd8c272ce\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.974922 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlrvq\" (UniqueName: \"kubernetes.io/projected/74ce2e86-cedf-4014-8d4c-8c126d58e7c9-kube-api-access-xlrvq\") pod \"nova-operator-controller-manager-79556f57fc-rlx29\" (UID: \"74ce2e86-cedf-4014-8d4c-8c126d58e7c9\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.974958 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.976753 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwdpx\" (UniqueName: \"kubernetes.io/projected/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-kube-api-access-rwdpx\") pod 
\"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.976783 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2rdm\" (UniqueName: \"kubernetes.io/projected/d01f394d-062f-4736-a7fa-abe501a5b2d9-kube-api-access-q2rdm\") pod \"swift-operator-controller-manager-d77b94747-w2rwh\" (UID: \"d01f394d-062f-4736-a7fa-abe501a5b2d9\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.976804 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88w7n\" (UniqueName: \"kubernetes.io/projected/99a43674-e3dd-46c8-8fe7-b527112b3ff1-kube-api-access-88w7n\") pod \"octavia-operator-controller-manager-64cdc6ff96-qdknn\" (UID: \"99a43674-e3dd-46c8-8fe7-b527112b3ff1\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.979905 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mnj4\" (UniqueName: \"kubernetes.io/projected/fdbde397-fc85-41aa-915f-3b8d77553adc-kube-api-access-4mnj4\") pod \"placement-operator-controller-manager-57988cc5b5-snkf4\" (UID: \"fdbde397-fc85-41aa-915f-3b8d77553adc\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.980361 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.984809 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh"] Nov 25 19:49:13 crc kubenswrapper[4775]: I1125 19:49:13.995124 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.002090 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88w7n\" (UniqueName: \"kubernetes.io/projected/99a43674-e3dd-46c8-8fe7-b527112b3ff1-kube-api-access-88w7n\") pod \"octavia-operator-controller-manager-64cdc6ff96-qdknn\" (UID: \"99a43674-e3dd-46c8-8fe7-b527112b3ff1\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.008848 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.012117 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-2cfz6" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.013324 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.017117 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlrvq\" (UniqueName: \"kubernetes.io/projected/74ce2e86-cedf-4014-8d4c-8c126d58e7c9-kube-api-access-xlrvq\") pod \"nova-operator-controller-manager-79556f57fc-rlx29\" (UID: \"74ce2e86-cedf-4014-8d4c-8c126d58e7c9\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.023967 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.073637 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.082592 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.084890 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwdpx\" (UniqueName: \"kubernetes.io/projected/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-kube-api-access-rwdpx\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.084930 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2rdm\" (UniqueName: \"kubernetes.io/projected/d01f394d-062f-4736-a7fa-abe501a5b2d9-kube-api-access-q2rdm\") pod \"swift-operator-controller-manager-d77b94747-w2rwh\" (UID: \"d01f394d-062f-4736-a7fa-abe501a5b2d9\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.084974 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtsb\" (UniqueName: \"kubernetes.io/projected/8af71b48-ef6a-4e7f-8d32-e627f46a93ff-kube-api-access-qdtsb\") pod \"telemetry-operator-controller-manager-76cc84c6bb-qnklp\" (UID: \"8af71b48-ef6a-4e7f-8d32-e627f46a93ff\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.085000 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mnj4\" (UniqueName: \"kubernetes.io/projected/fdbde397-fc85-41aa-915f-3b8d77553adc-kube-api-access-4mnj4\") pod \"placement-operator-controller-manager-57988cc5b5-snkf4\" (UID: \"fdbde397-fc85-41aa-915f-3b8d77553adc\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" Nov 25 
19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.085046 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfdkp\" (UniqueName: \"kubernetes.io/projected/d9838469-3633-4b7d-88dc-0a6fd8c272ce-kube-api-access-jfdkp\") pod \"ovn-operator-controller-manager-56897c768d-p6c9k\" (UID: \"d9838469-3633-4b7d-88dc-0a6fd8c272ce\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.085091 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.085238 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.085306 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert podName:7b75a9f0-bd88-4e53-973a-0ce97e41cec8 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:14.585292582 +0000 UTC m=+936.501654948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" (UID: "7b75a9f0-bd88-4e53-973a-0ce97e41cec8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.092512 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.101102 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.103843 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.109125 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.113685 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.116724 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cj8zf" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.121343 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfdkp\" (UniqueName: \"kubernetes.io/projected/d9838469-3633-4b7d-88dc-0a6fd8c272ce-kube-api-access-jfdkp\") pod \"ovn-operator-controller-manager-56897c768d-p6c9k\" (UID: \"d9838469-3633-4b7d-88dc-0a6fd8c272ce\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.128452 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.129073 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2rdm\" (UniqueName: 
\"kubernetes.io/projected/d01f394d-062f-4736-a7fa-abe501a5b2d9-kube-api-access-q2rdm\") pod \"swift-operator-controller-manager-d77b94747-w2rwh\" (UID: \"d01f394d-062f-4736-a7fa-abe501a5b2d9\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.134809 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mnj4\" (UniqueName: \"kubernetes.io/projected/fdbde397-fc85-41aa-915f-3b8d77553adc-kube-api-access-4mnj4\") pod \"placement-operator-controller-manager-57988cc5b5-snkf4\" (UID: \"fdbde397-fc85-41aa-915f-3b8d77553adc\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.136255 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwdpx\" (UniqueName: \"kubernetes.io/projected/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-kube-api-access-rwdpx\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.167450 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.168709 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.171115 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-j9djm" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.173186 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.187509 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtsb\" (UniqueName: \"kubernetes.io/projected/8af71b48-ef6a-4e7f-8d32-e627f46a93ff-kube-api-access-qdtsb\") pod \"telemetry-operator-controller-manager-76cc84c6bb-qnklp\" (UID: \"8af71b48-ef6a-4e7f-8d32-e627f46a93ff\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.188321 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64w4z\" (UniqueName: \"kubernetes.io/projected/043fa652-c214-4428-877b-723905f53acb-kube-api-access-64w4z\") pod \"test-operator-controller-manager-5cd6c7f4c8-s5nz8\" (UID: \"043fa652-c214-4428-877b-723905f53acb\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.188458 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4crmp\" (UniqueName: \"kubernetes.io/projected/c368de49-6c69-4140-a8c2-21c7afc13031-kube-api-access-4crmp\") pod \"watcher-operator-controller-manager-656dcb59d4-mhvjh\" (UID: \"c368de49-6c69-4140-a8c2-21c7afc13031\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.198056 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.208706 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.211226 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtsb\" (UniqueName: \"kubernetes.io/projected/8af71b48-ef6a-4e7f-8d32-e627f46a93ff-kube-api-access-qdtsb\") pod \"telemetry-operator-controller-manager-76cc84c6bb-qnklp\" (UID: \"8af71b48-ef6a-4e7f-8d32-e627f46a93ff\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.225749 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.227112 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.229704 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8dkm9" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.229849 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.231800 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.231990 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.232317 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.266746 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.268093 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.273059 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gqrtt" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.290073 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.290794 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmccf\" (UniqueName: \"kubernetes.io/projected/592eda0a-f963-48bf-9902-3e52795051e3-kube-api-access-pmccf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hg4df\" (UID: \"592eda0a-f963-48bf-9902-3e52795051e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.290850 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64w4z\" (UniqueName: \"kubernetes.io/projected/043fa652-c214-4428-877b-723905f53acb-kube-api-access-64w4z\") pod \"test-operator-controller-manager-5cd6c7f4c8-s5nz8\" (UID: \"043fa652-c214-4428-877b-723905f53acb\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.290887 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4crmp\" (UniqueName: \"kubernetes.io/projected/c368de49-6c69-4140-a8c2-21c7afc13031-kube-api-access-4crmp\") pod \"watcher-operator-controller-manager-656dcb59d4-mhvjh\" (UID: \"c368de49-6c69-4140-a8c2-21c7afc13031\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.290918 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsbdr\" (UniqueName: \"kubernetes.io/projected/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-kube-api-access-qsbdr\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.290948 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.291000 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.309801 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.316513 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4crmp\" (UniqueName: \"kubernetes.io/projected/c368de49-6c69-4140-a8c2-21c7afc13031-kube-api-access-4crmp\") pod \"watcher-operator-controller-manager-656dcb59d4-mhvjh\" (UID: \"c368de49-6c69-4140-a8c2-21c7afc13031\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.332999 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.339366 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64w4z\" (UniqueName: \"kubernetes.io/projected/043fa652-c214-4428-877b-723905f53acb-kube-api-access-64w4z\") pod \"test-operator-controller-manager-5cd6c7f4c8-s5nz8\" (UID: \"043fa652-c214-4428-877b-723905f53acb\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.367508 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.392315 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.392375 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.392402 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmccf\" (UniqueName: \"kubernetes.io/projected/592eda0a-f963-48bf-9902-3e52795051e3-kube-api-access-pmccf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hg4df\" (UID: \"592eda0a-f963-48bf-9902-3e52795051e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.392438 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsbdr\" (UniqueName: \"kubernetes.io/projected/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-kube-api-access-qsbdr\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.392465 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.392585 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.392633 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:14.892616717 +0000 UTC m=+936.808979083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "metrics-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.392821 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.392900 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert podName:256bc456-e90c-4c18-8531-9d0470473b55 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:15.392881933 +0000 UTC m=+937.309244299 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert") pod "infra-operator-controller-manager-57548d458d-tcv4j" (UID: "256bc456-e90c-4c18-8531-9d0470473b55") : secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.392950 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.392973 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:14.892965146 +0000 UTC m=+936.809327512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.415449 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsbdr\" (UniqueName: \"kubernetes.io/projected/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-kube-api-access-qsbdr\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.420614 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmccf\" (UniqueName: \"kubernetes.io/projected/592eda0a-f963-48bf-9902-3e52795051e3-kube-api-access-pmccf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hg4df\" (UID: \"592eda0a-f963-48bf-9902-3e52795051e3\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.441640 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.451955 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.472891 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.518624 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.590460 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-vn52j"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.594518 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.608992 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.609200 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.609272 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert podName:7b75a9f0-bd88-4e53-973a-0ce97e41cec8 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:15.609254471 +0000 UTC m=+937.525616837 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" (UID: "7b75a9f0-bd88-4e53-973a-0ce97e41cec8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: W1125 19:49:14.651924 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1338e2e_e4e6_4c4b_a410_72e2d1acab0d.slice/crio-7c54b7ddaa98e7ebd07321a924d5db418f1334b99dc2b2ceca1c6449a4430e9a WatchSource:0}: Error finding container 7c54b7ddaa98e7ebd07321a924d5db418f1334b99dc2b2ceca1c6449a4430e9a: Status 404 returned error can't find the container with id 7c54b7ddaa98e7ebd07321a924d5db418f1334b99dc2b2ceca1c6449a4430e9a Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.834849 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj"] Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.964104 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.964231 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.964387 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.964442 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:15.964426608 +0000 UTC m=+937.880788974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "webhook-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.964536 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: E1125 19:49:14.964602 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:15.964584723 +0000 UTC m=+937.880947169 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "metrics-server-cert" not found Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.991159 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" event={"ID":"ca39bca1-68fa-4d64-a929-1b3d013bb679","Type":"ContainerStarted","Data":"bb626e7755d6aef94006fa07aa572c9c5c05f7cd4826b21a4682bea5ab45f913"} Nov 25 19:49:14 crc kubenswrapper[4775]: I1125 19:49:14.991194 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" event={"ID":"f1338e2e-e4e6-4c4b-a410-72e2d1acab0d","Type":"ContainerStarted","Data":"7c54b7ddaa98e7ebd07321a924d5db418f1334b99dc2b2ceca1c6449a4430e9a"} Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.021131 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.034017 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.216370 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.221852 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.478770 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.479031 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.479120 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert podName:256bc456-e90c-4c18-8531-9d0470473b55 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:17.479098416 +0000 UTC m=+939.395460782 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert") pod "infra-operator-controller-manager-57548d458d-tcv4j" (UID: "256bc456-e90c-4c18-8531-9d0470473b55") : secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.630846 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.638950 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4"] Nov 25 19:49:15 crc kubenswrapper[4775]: W1125 19:49:15.644042 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc368de49_6c69_4140_a8c2_21c7afc13031.slice/crio-a3cb3a92073003d41090c107c7662706194dc8626f69da4249521684dd5cf7dd WatchSource:0}: Error finding container a3cb3a92073003d41090c107c7662706194dc8626f69da4249521684dd5cf7dd: Status 404 returned error can't find the container with id 
a3cb3a92073003d41090c107c7662706194dc8626f69da4249521684dd5cf7dd Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.650395 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.659741 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.679122 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.687936 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.688144 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.688202 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert podName:7b75a9f0-bd88-4e53-973a-0ce97e41cec8 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:17.688185416 +0000 UTC m=+939.604547782 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" (UID: "7b75a9f0-bd88-4e53-973a-0ce97e41cec8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.701090 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.727495 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc"] Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.735543 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88w7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-qdknn_openstack-operators(99a43674-e3dd-46c8-8fe7-b527112b3ff1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.735793 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64w4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-s5nz8_openstack-operators(043fa652-c214-4428-877b-723905f53acb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.741501 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xlrvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-rlx29_openstack-operators(74ce2e86-cedf-4014-8d4c-8c126d58e7c9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.745947 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh"] Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.762361 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9413ed1bc2ae1a6bd28c59b1c7f7e91e1638de7b2a7d4729ed3fa2135182465d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zccfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5d494799bf-892tw_openstack-operators(08376459-180b-411f-9c74-c918980541f6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.762565 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdtsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-qnklp_openstack-operators(8af71b48-ef6a-4e7f-8d32-e627f46a93ff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.762658 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64w4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-s5nz8_openstack-operators(043fa652-c214-4428-877b-723905f53acb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.762727 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88w7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-qdknn_openstack-operators(99a43674-e3dd-46c8-8fe7-b527112b3ff1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.763813 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" podUID="99a43674-e3dd-46c8-8fe7-b527112b3ff1" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.763859 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" 
podUID="043fa652-c214-4428-877b-723905f53acb" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.764028 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xlrvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-rlx29_openstack-operators(74ce2e86-cedf-4014-8d4c-8c126d58e7c9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.765447 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to 
\"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" podUID="74ce2e86-cedf-4014-8d4c-8c126d58e7c9" Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.768800 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn"] Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.774593 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zccfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-operator-controller-manager-5d494799bf-892tw_openstack-operators(08376459-180b-411f-9c74-c918980541f6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.775004 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdtsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-qnklp_openstack-operators(8af71b48-ef6a-4e7f-8d32-e627f46a93ff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.775955 4775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" podUID="08376459-180b-411f-9c74-c918980541f6" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.776163 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" podUID="8af71b48-ef6a-4e7f-8d32-e627f46a93ff" Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.777119 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.789150 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.798227 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.804582 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp"] Nov 25 19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.990953 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 
19:49:15 crc kubenswrapper[4775]: I1125 19:49:15.991034 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.991158 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.991242 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:17.991224735 +0000 UTC m=+939.907587101 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "metrics-server-cert" not found Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.991344 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 19:49:15 crc kubenswrapper[4775]: E1125 19:49:15.991756 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:17.991687987 +0000 UTC m=+939.908050433 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "webhook-server-cert" not found Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.005046 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" event={"ID":"360afa93-07ee-47ad-beb7-cd45b9cc9bef","Type":"ContainerStarted","Data":"1f41eefa3e9e295696c64c88bda87bd5028643c4a30bcc649786ee3053b4203a"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.007371 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" event={"ID":"e7a4f97f-5b6f-4347-b156-d96e1be21183","Type":"ContainerStarted","Data":"d5e583f14925f0ea41174e8db3966050408b02fcf5510297d62d08e8b98ffad9"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.009322 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" event={"ID":"043fa652-c214-4428-877b-723905f53acb","Type":"ContainerStarted","Data":"5f3e761450a880cb03deed73da42f46a26fdaecd000014b1b0959af20012be3e"} Nov 25 19:49:16 crc kubenswrapper[4775]: E1125 19:49:16.014784 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" podUID="043fa652-c214-4428-877b-723905f53acb" Nov 25 19:49:16 crc 
kubenswrapper[4775]: I1125 19:49:16.015512 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" event={"ID":"fdbde397-fc85-41aa-915f-3b8d77553adc","Type":"ContainerStarted","Data":"857eae55aa8cabd6998e95176cd7bea0633287718a1b115eb3a8acab6fb58541"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.018809 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" event={"ID":"99a43674-e3dd-46c8-8fe7-b527112b3ff1","Type":"ContainerStarted","Data":"bbc401f0dda8f9d9853dd1fdefeb37976c6560c3b723ec6628b225f1fb7275aa"} Nov 25 19:49:16 crc kubenswrapper[4775]: E1125 19:49:16.021326 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" podUID="99a43674-e3dd-46c8-8fe7-b527112b3ff1" Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.021509 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" event={"ID":"fb22768d-951e-4a69-bba6-8728e80e2935","Type":"ContainerStarted","Data":"fb6f8948b64e24169cf387cfb453198e4356f7598e823ae559eb2b32c1071f55"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.024704 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" 
event={"ID":"c368de49-6c69-4140-a8c2-21c7afc13031","Type":"ContainerStarted","Data":"a3cb3a92073003d41090c107c7662706194dc8626f69da4249521684dd5cf7dd"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.028540 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" event={"ID":"d8f444e1-3e73-4daa-a5f0-4fe2236a691b","Type":"ContainerStarted","Data":"6bf7087db68a3ecce069526cde30e8600bfbe367778eb368cea9bbf20347945a"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.030678 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" event={"ID":"08376459-180b-411f-9c74-c918980541f6","Type":"ContainerStarted","Data":"eb5fe64d77064dc4e81bbeb21132bafe0940b2dfb60a6cb2be4a966c562bfa02"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.033286 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" event={"ID":"d9838469-3633-4b7d-88dc-0a6fd8c272ce","Type":"ContainerStarted","Data":"590e91cb9b49a4a375e7675be034447d8ddc815d973e6e155db6957b34d02590"} Nov 25 19:49:16 crc kubenswrapper[4775]: E1125 19:49:16.034628 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9413ed1bc2ae1a6bd28c59b1c7f7e91e1638de7b2a7d4729ed3fa2135182465d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" podUID="08376459-180b-411f-9c74-c918980541f6" Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.035896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" event={"ID":"74ce2e86-cedf-4014-8d4c-8c126d58e7c9","Type":"ContainerStarted","Data":"9546dd1c4fd1706de09e6f9aa4b1c3c15cc14bdb0d511f95f78ce85cb948a91e"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.044925 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" event={"ID":"d01f394d-062f-4736-a7fa-abe501a5b2d9","Type":"ContainerStarted","Data":"d23fc2bc880a2eec743ce0fb7425f029c396328e04a99055a205595182a30357"} Nov 25 19:49:16 crc kubenswrapper[4775]: E1125 19:49:16.043803 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" podUID="74ce2e86-cedf-4014-8d4c-8c126d58e7c9" Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.046981 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" event={"ID":"9a436d5c-4f54-479c-846f-11e5d66d91fa","Type":"ContainerStarted","Data":"bbd0d8a7ac15b762e1b2947cfa5dd412a34be68ffa94ade346268564e6627712"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.048100 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" event={"ID":"592eda0a-f963-48bf-9902-3e52795051e3","Type":"ContainerStarted","Data":"fe1e8bf8e636e9b811e74935706b1ee7bb3061ee961e50a1f109714e1d9572a4"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.051383 4775 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" event={"ID":"6910455f-354f-4f91-8333-5cb54be87db6","Type":"ContainerStarted","Data":"6d0a87de9da0f5658b21f35a51960e416b337e8d8e4d0ba232b02525f4eb2cee"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.053147 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" event={"ID":"a778d0b3-0440-4c61-8a61-59524e36835e","Type":"ContainerStarted","Data":"e0d15e89dd4184b0ee3529b57cab9fac8b2866ab83e8e559dd666551d7a632da"} Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.059258 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" event={"ID":"8af71b48-ef6a-4e7f-8d32-e627f46a93ff","Type":"ContainerStarted","Data":"d9cbdb868775eedd4be1a5eb004461e8b26d3442b6696c1d09eb38da61334f89"} Nov 25 19:49:16 crc kubenswrapper[4775]: E1125 19:49:16.066521 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" podUID="8af71b48-ef6a-4e7f-8d32-e627f46a93ff" Nov 25 19:49:16 crc kubenswrapper[4775]: I1125 19:49:16.066690 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" event={"ID":"88abb3bd-eb47-4185-a1a9-4f300ed99167","Type":"ContainerStarted","Data":"d23d6ff463127e6bec9550d8245837fa2d999007d39b96b118a53be301f4fe12"} Nov 25 19:49:17 crc 
kubenswrapper[4775]: I1125 19:49:17.062788 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:17 crc kubenswrapper[4775]: I1125 19:49:17.064147 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:17 crc kubenswrapper[4775]: E1125 19:49:17.107756 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" podUID="99a43674-e3dd-46c8-8fe7-b527112b3ff1" Nov 25 19:49:17 crc kubenswrapper[4775]: E1125 19:49:17.107835 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" podUID="74ce2e86-cedf-4014-8d4c-8c126d58e7c9" Nov 25 19:49:17 crc kubenswrapper[4775]: E1125 19:49:17.107884 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed 
to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" podUID="8af71b48-ef6a-4e7f-8d32-e627f46a93ff" Nov 25 19:49:17 crc kubenswrapper[4775]: E1125 19:49:17.107928 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9413ed1bc2ae1a6bd28c59b1c7f7e91e1638de7b2a7d4729ed3fa2135182465d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" podUID="08376459-180b-411f-9c74-c918980541f6" Nov 25 19:49:17 crc kubenswrapper[4775]: E1125 19:49:17.112941 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" podUID="043fa652-c214-4428-877b-723905f53acb" Nov 25 19:49:17 crc kubenswrapper[4775]: I1125 19:49:17.151426 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:17 crc kubenswrapper[4775]: I1125 19:49:17.520847 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert\") pod 
\"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:17 crc kubenswrapper[4775]: E1125 19:49:17.522748 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:17 crc kubenswrapper[4775]: E1125 19:49:17.522812 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert podName:256bc456-e90c-4c18-8531-9d0470473b55 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:21.522797752 +0000 UTC m=+943.439160118 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert") pod "infra-operator-controller-manager-57548d458d-tcv4j" (UID: "256bc456-e90c-4c18-8531-9d0470473b55") : secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:17 crc kubenswrapper[4775]: I1125 19:49:17.724019 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:17 crc kubenswrapper[4775]: E1125 19:49:17.724222 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:17 crc kubenswrapper[4775]: E1125 19:49:17.724293 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert podName:7b75a9f0-bd88-4e53-973a-0ce97e41cec8 nodeName:}" failed. 
No retries permitted until 2025-11-25 19:49:21.724276327 +0000 UTC m=+943.640638683 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" (UID: "7b75a9f0-bd88-4e53-973a-0ce97e41cec8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:18 crc kubenswrapper[4775]: I1125 19:49:18.030238 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:18 crc kubenswrapper[4775]: I1125 19:49:18.030334 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:18 crc kubenswrapper[4775]: E1125 19:49:18.030470 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 19:49:18 crc kubenswrapper[4775]: E1125 19:49:18.030517 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:22.030504071 +0000 UTC m=+943.946866437 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "metrics-server-cert" not found Nov 25 19:49:18 crc kubenswrapper[4775]: E1125 19:49:18.030642 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 19:49:18 crc kubenswrapper[4775]: E1125 19:49:18.030744 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:22.030722127 +0000 UTC m=+943.947084493 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "webhook-server-cert" not found Nov 25 19:49:18 crc kubenswrapper[4775]: I1125 19:49:18.141695 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:18 crc kubenswrapper[4775]: I1125 19:49:18.185808 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5cwf"] Nov 25 19:49:20 crc kubenswrapper[4775]: I1125 19:49:20.117677 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b5cwf" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerName="registry-server" containerID="cri-o://63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50" gracePeriod=2 Nov 25 19:49:21 crc kubenswrapper[4775]: I1125 19:49:21.125570 4775 generic.go:334] "Generic (PLEG): container 
finished" podID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerID="63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50" exitCode=0 Nov 25 19:49:21 crc kubenswrapper[4775]: I1125 19:49:21.125618 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5cwf" event={"ID":"83480cff-9fa1-4812-90a4-0bedd4ba5637","Type":"ContainerDied","Data":"63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50"} Nov 25 19:49:21 crc kubenswrapper[4775]: I1125 19:49:21.591615 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:21 crc kubenswrapper[4775]: E1125 19:49:21.591878 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:21 crc kubenswrapper[4775]: E1125 19:49:21.592191 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert podName:256bc456-e90c-4c18-8531-9d0470473b55 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:29.592164046 +0000 UTC m=+951.508526422 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert") pod "infra-operator-controller-manager-57548d458d-tcv4j" (UID: "256bc456-e90c-4c18-8531-9d0470473b55") : secret "infra-operator-webhook-server-cert" not found Nov 25 19:49:21 crc kubenswrapper[4775]: I1125 19:49:21.794357 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:21 crc kubenswrapper[4775]: E1125 19:49:21.794618 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:21 crc kubenswrapper[4775]: E1125 19:49:21.794758 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert podName:7b75a9f0-bd88-4e53-973a-0ce97e41cec8 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:29.79473071 +0000 UTC m=+951.711093096 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" (UID: "7b75a9f0-bd88-4e53-973a-0ce97e41cec8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 19:49:22 crc kubenswrapper[4775]: I1125 19:49:22.100509 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:22 crc kubenswrapper[4775]: E1125 19:49:22.100636 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 19:49:22 crc kubenswrapper[4775]: E1125 19:49:22.100779 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:30.10076512 +0000 UTC m=+952.017127486 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "metrics-server-cert" not found Nov 25 19:49:22 crc kubenswrapper[4775]: I1125 19:49:22.101138 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:22 crc kubenswrapper[4775]: E1125 19:49:22.101223 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 19:49:22 crc kubenswrapper[4775]: E1125 19:49:22.101253 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs podName:b5f009d3-7b77-49c1-b5f1-b8219b31ed47 nodeName:}" failed. No retries permitted until 2025-11-25 19:49:30.101246043 +0000 UTC m=+952.017608409 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs") pod "openstack-operator-controller-manager-7f84d9dcfc-rrwgl" (UID: "b5f009d3-7b77-49c1-b5f1-b8219b31ed47") : secret "webhook-server-cert" not found Nov 25 19:49:27 crc kubenswrapper[4775]: E1125 19:49:27.062213 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50 is running failed: container process not found" containerID="63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 19:49:27 crc kubenswrapper[4775]: E1125 19:49:27.062796 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50 is running failed: container process not found" containerID="63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 19:49:27 crc kubenswrapper[4775]: E1125 19:49:27.063140 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50 is running failed: container process not found" containerID="63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 19:49:27 crc kubenswrapper[4775]: E1125 19:49:27.063172 4775 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/community-operators-b5cwf" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerName="registry-server" Nov 25 19:49:29 crc kubenswrapper[4775]: I1125 19:49:29.622099 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:29 crc kubenswrapper[4775]: I1125 19:49:29.630594 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256bc456-e90c-4c18-8531-9d0470473b55-cert\") pod \"infra-operator-controller-manager-57548d458d-tcv4j\" (UID: \"256bc456-e90c-4c18-8531-9d0470473b55\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:29 crc kubenswrapper[4775]: I1125 19:49:29.821504 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:29 crc kubenswrapper[4775]: I1125 19:49:29.825066 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:29 crc kubenswrapper[4775]: I1125 19:49:29.831098 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b75a9f0-bd88-4e53-973a-0ce97e41cec8-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv\" (UID: \"7b75a9f0-bd88-4e53-973a-0ce97e41cec8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:29 crc kubenswrapper[4775]: I1125 19:49:29.861206 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:29 crc kubenswrapper[4775]: I1125 19:49:29.950447 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pl4vl"] Nov 25 19:49:29 crc kubenswrapper[4775]: I1125 19:49:29.952372 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:29 crc kubenswrapper[4775]: I1125 19:49:29.962818 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pl4vl"] Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.130399 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.130450 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-catalog-content\") pod \"certified-operators-pl4vl\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.130484 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfpq\" (UniqueName: \"kubernetes.io/projected/648a6737-d8ad-461b-a889-779a45c37ed7-kube-api-access-qhfpq\") pod \"certified-operators-pl4vl\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.130523 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 
19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.130548 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-utilities\") pod \"certified-operators-pl4vl\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.146307 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-webhook-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.147853 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f009d3-7b77-49c1-b5f1-b8219b31ed47-metrics-certs\") pod \"openstack-operator-controller-manager-7f84d9dcfc-rrwgl\" (UID: \"b5f009d3-7b77-49c1-b5f1-b8219b31ed47\") " pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.163570 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.232023 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-utilities\") pod \"certified-operators-pl4vl\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.232182 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-catalog-content\") pod \"certified-operators-pl4vl\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.232238 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhfpq\" (UniqueName: \"kubernetes.io/projected/648a6737-d8ad-461b-a889-779a45c37ed7-kube-api-access-qhfpq\") pod \"certified-operators-pl4vl\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.232616 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-utilities\") pod \"certified-operators-pl4vl\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.232722 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-catalog-content\") pod \"certified-operators-pl4vl\" (UID: 
\"648a6737-d8ad-461b-a889-779a45c37ed7\") " pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.252783 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhfpq\" (UniqueName: \"kubernetes.io/projected/648a6737-d8ad-461b-a889-779a45c37ed7-kube-api-access-qhfpq\") pod \"certified-operators-pl4vl\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: I1125 19:49:30.285306 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:30 crc kubenswrapper[4775]: E1125 19:49:30.649547 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4" Nov 25 19:49:30 crc kubenswrapper[4775]: E1125 19:49:30.649773 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q2rdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-w2rwh_openstack-operators(d01f394d-062f-4736-a7fa-abe501a5b2d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:49:31 crc kubenswrapper[4775]: E1125 19:49:31.171024 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/manila-operator@sha256:89910bc3ecceb7590d3207ac294eb7354de358cf39ef03c72323b26c598e50e6" Nov 25 19:49:31 crc kubenswrapper[4775]: E1125 19:49:31.171511 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:89910bc3ecceb7590d3207ac294eb7354de358cf39ef03c72323b26c598e50e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fhk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5d499bf58b-sd2lc_openstack-operators(e7a4f97f-5b6f-4347-b156-d96e1be21183): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:49:31 crc kubenswrapper[4775]: E1125 19:49:31.815069 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:d65dbfc956e9cf376f3c48fc3a0942cb7306b5164f898c40d1efca106df81db7" Nov 25 19:49:31 crc kubenswrapper[4775]: E1125 19:49:31.815258 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d65dbfc956e9cf376f3c48fc3a0942cb7306b5164f898c40d1efca106df81db7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nvbft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-67cb4dc6d4-86lv6_openstack-operators(d8f444e1-3e73-4daa-a5f0-4fe2236a691b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:49:32 crc kubenswrapper[4775]: E1125 19:49:32.360065 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:ec4e5c911c1d0f1ea211a04b251a9d2e95b69d141c1caf07a0381693b2d6368b" Nov 25 19:49:32 crc kubenswrapper[4775]: E1125 19:49:32.362487 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:ec4e5c911c1d0f1ea211a04b251a9d2e95b69d141c1caf07a0381693b2d6368b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v49zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-955677c94-vn52j_openstack-operators(f1338e2e-e4e6-4c4b-a410-72e2d1acab0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.412090 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.571514 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-catalog-content\") pod \"83480cff-9fa1-4812-90a4-0bedd4ba5637\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.571562 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-utilities\") pod \"83480cff-9fa1-4812-90a4-0bedd4ba5637\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.571603 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92ckf\" (UniqueName: \"kubernetes.io/projected/83480cff-9fa1-4812-90a4-0bedd4ba5637-kube-api-access-92ckf\") pod \"83480cff-9fa1-4812-90a4-0bedd4ba5637\" (UID: \"83480cff-9fa1-4812-90a4-0bedd4ba5637\") " Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.573363 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-utilities" (OuterVolumeSpecName: "utilities") pod "83480cff-9fa1-4812-90a4-0bedd4ba5637" (UID: "83480cff-9fa1-4812-90a4-0bedd4ba5637"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.577106 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83480cff-9fa1-4812-90a4-0bedd4ba5637-kube-api-access-92ckf" (OuterVolumeSpecName: "kube-api-access-92ckf") pod "83480cff-9fa1-4812-90a4-0bedd4ba5637" (UID: "83480cff-9fa1-4812-90a4-0bedd4ba5637"). InnerVolumeSpecName "kube-api-access-92ckf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.646994 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83480cff-9fa1-4812-90a4-0bedd4ba5637" (UID: "83480cff-9fa1-4812-90a4-0bedd4ba5637"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.673276 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.673313 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83480cff-9fa1-4812-90a4-0bedd4ba5637-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:49:32 crc kubenswrapper[4775]: I1125 19:49:32.673323 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92ckf\" (UniqueName: \"kubernetes.io/projected/83480cff-9fa1-4812-90a4-0bedd4ba5637-kube-api-access-92ckf\") on node \"crc\" DevicePath \"\"" Nov 25 19:49:33 crc kubenswrapper[4775]: I1125 19:49:33.220791 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5cwf" event={"ID":"83480cff-9fa1-4812-90a4-0bedd4ba5637","Type":"ContainerDied","Data":"ea844ed3f29cf0f7ffcbcb082f2aaa3bd54a78f88f27b8e7bf5b0010449b26e1"} Nov 25 19:49:33 crc kubenswrapper[4775]: I1125 19:49:33.220871 4775 scope.go:117] "RemoveContainer" containerID="63f59c2b40345dcfff315bc825cb9b7177bd9528cf791444dcf2cdfaffe2ee50" Nov 25 19:49:33 crc kubenswrapper[4775]: I1125 19:49:33.220969 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5cwf" Nov 25 19:49:33 crc kubenswrapper[4775]: I1125 19:49:33.247671 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5cwf"] Nov 25 19:49:33 crc kubenswrapper[4775]: I1125 19:49:33.253368 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b5cwf"] Nov 25 19:49:34 crc kubenswrapper[4775]: E1125 19:49:34.137621 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:3dbf9fd9dce75f1fb250ee4c4097ad77d2f34110b61d85e37abd9c472e022e6c" Nov 25 19:49:34 crc kubenswrapper[4775]: E1125 19:49:34.137848 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:3dbf9fd9dce75f1fb250ee4c4097ad77d2f34110b61d85e37abd9c472e022e6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8888,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b64f4fb85-w8459_openstack-operators(a778d0b3-0440-4c61-8a61-59524e36835e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:49:34 crc kubenswrapper[4775]: E1125 19:49:34.774571 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 25 19:49:34 crc kubenswrapper[4775]: E1125 19:49:34.775098 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pmccf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-hg4df_openstack-operators(592eda0a-f963-48bf-9902-3e52795051e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:49:34 crc kubenswrapper[4775]: E1125 19:49:34.776401 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" podUID="592eda0a-f963-48bf-9902-3e52795051e3" Nov 25 19:49:34 crc kubenswrapper[4775]: I1125 19:49:34.855843 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" path="/var/lib/kubelet/pods/83480cff-9fa1-4812-90a4-0bedd4ba5637/volumes" Nov 25 19:49:35 crc kubenswrapper[4775]: E1125 19:49:35.236084 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" podUID="592eda0a-f963-48bf-9902-3e52795051e3" Nov 25 19:49:37 crc kubenswrapper[4775]: E1125 19:49:37.955968 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711" Nov 25 19:49:37 crc kubenswrapper[4775]: E1125 19:49:37.956507 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6sj4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b4567c7cf-hjdwf_openstack-operators(6910455f-354f-4f91-8333-5cb54be87db6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:49:38 crc kubenswrapper[4775]: I1125 19:49:38.628461 4775 scope.go:117] "RemoveContainer" containerID="3f53808e0badf80e1872a054cebe59d104fe063a191f3d3b8ceb860847c9ab66" Nov 25 19:49:39 crc kubenswrapper[4775]: I1125 19:49:39.084905 4775 scope.go:117] "RemoveContainer" containerID="2e738db3da0286215a0f49a87e6926035418e53ea885d99dbfc68e2ee63034c1" Nov 25 19:49:39 crc kubenswrapper[4775]: I1125 19:49:39.329832 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pl4vl"] Nov 25 19:49:39 crc kubenswrapper[4775]: I1125 19:49:39.366541 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl"] Nov 25 19:49:39 crc kubenswrapper[4775]: I1125 19:49:39.381523 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv"] Nov 25 19:49:39 crc kubenswrapper[4775]: I1125 19:49:39.428958 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j"] Nov 25 19:49:39 crc kubenswrapper[4775]: W1125 19:49:39.839869 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod256bc456_e90c_4c18_8531_9d0470473b55.slice/crio-a12a0ec34c789f9c00bc8c987c765bb42f5a99849e4fdcb6e71ea9f10730078a WatchSource:0}: Error finding container a12a0ec34c789f9c00bc8c987c765bb42f5a99849e4fdcb6e71ea9f10730078a: Status 404 returned error can't find the container with id a12a0ec34c789f9c00bc8c987c765bb42f5a99849e4fdcb6e71ea9f10730078a Nov 25 19:49:39 crc kubenswrapper[4775]: W1125 19:49:39.840151 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b75a9f0_bd88_4e53_973a_0ce97e41cec8.slice/crio-c2be090e3a4da2404594ec571be88dc734e6d9933be65562762d49a96b83ef6a WatchSource:0}: Error finding container c2be090e3a4da2404594ec571be88dc734e6d9933be65562762d49a96b83ef6a: Status 404 returned error can't find the container with id c2be090e3a4da2404594ec571be88dc734e6d9933be65562762d49a96b83ef6a Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.288101 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" event={"ID":"7b75a9f0-bd88-4e53-973a-0ce97e41cec8","Type":"ContainerStarted","Data":"c2be090e3a4da2404594ec571be88dc734e6d9933be65562762d49a96b83ef6a"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.290576 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4vl" event={"ID":"648a6737-d8ad-461b-a889-779a45c37ed7","Type":"ContainerStarted","Data":"06a0e7c9065a2b8220f5f7a8f5c4a6b85019e2f2c29a76bc9073ba07ab2da8bd"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.294757 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" event={"ID":"d9838469-3633-4b7d-88dc-0a6fd8c272ce","Type":"ContainerStarted","Data":"c3c53913e36be2a88e6c9829dab6c15776d3e48ecc0ceaef639606b3bc00d48e"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.296775 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" event={"ID":"fb22768d-951e-4a69-bba6-8728e80e2935","Type":"ContainerStarted","Data":"34ec3f431193b9e39987bee8a31328e8ea5fcb3e6021ac9288ab9d5b04f8438b"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.298893 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" event={"ID":"360afa93-07ee-47ad-beb7-cd45b9cc9bef","Type":"ContainerStarted","Data":"38ea3a50ff21e9f1c4be1c9a4e6935dfe5ff26428d764ae15e62a5166aa34b30"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.300916 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" event={"ID":"c368de49-6c69-4140-a8c2-21c7afc13031","Type":"ContainerStarted","Data":"60ec08b320bd43d67805edfe913b11ea3d233297c4c4f73025061eb2ae35215b"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.317163 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" event={"ID":"fdbde397-fc85-41aa-915f-3b8d77553adc","Type":"ContainerStarted","Data":"fdf6cdfb98a978798a45dff5fd7f3f11809e7efae9e115270503a3b50aba527e"} Nov 
25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.324504 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" event={"ID":"8af71b48-ef6a-4e7f-8d32-e627f46a93ff","Type":"ContainerStarted","Data":"5c24deb0c72d23cd05df712e5803027991a9b6c5f329bb6d723beb077920d9c6"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.338501 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" event={"ID":"74ce2e86-cedf-4014-8d4c-8c126d58e7c9","Type":"ContainerStarted","Data":"a6f667cfb89ea541810bd95f18a77015c60d32e00361fca78feb21c4736e26fe"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.341275 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" event={"ID":"043fa652-c214-4428-877b-723905f53acb","Type":"ContainerStarted","Data":"43bd6cf7e7707f7400db7245aaf689b7b185378be8711b7ec42791a1382d799c"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.353684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" event={"ID":"ca39bca1-68fa-4d64-a929-1b3d013bb679","Type":"ContainerStarted","Data":"ca04d39ca9b6849495049ae938d591daa7264a85ee2ddc211b461dc1d7662d22"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.359154 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" event={"ID":"b5f009d3-7b77-49c1-b5f1-b8219b31ed47","Type":"ContainerStarted","Data":"04edd2b77f8f5115a0869a333ea252202e5ceb0c94fc15dc9ebe38240f79d1b9"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.361917 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" 
event={"ID":"88abb3bd-eb47-4185-a1a9-4f300ed99167","Type":"ContainerStarted","Data":"8fb8f20a32349c3a48a2997be45a7aaa1e01d267dcc06ad1b7c0351ca37ad86e"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.363419 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" event={"ID":"9a436d5c-4f54-479c-846f-11e5d66d91fa","Type":"ContainerStarted","Data":"b395f6d44db9da28dd1a369539007e18ca471813b27a1220de505df5836164ab"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.369813 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" event={"ID":"99a43674-e3dd-46c8-8fe7-b527112b3ff1","Type":"ContainerStarted","Data":"9d5445bb489d78e5c55ec55618ee4836517dd07e4004c359cc043ed1752c2811"} Nov 25 19:49:40 crc kubenswrapper[4775]: I1125 19:49:40.371038 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" event={"ID":"256bc456-e90c-4c18-8531-9d0470473b55","Type":"ContainerStarted","Data":"a12a0ec34c789f9c00bc8c987c765bb42f5a99849e4fdcb6e71ea9f10730078a"} Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.070357 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.070695 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 
19:49:41.070759 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.071430 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b6dd9da01186a3ce4b866b2112d5b02fb5c358a2952aa59bf01efd8cd71d7aa"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.071483 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://7b6dd9da01186a3ce4b866b2112d5b02fb5c358a2952aa59bf01efd8cd71d7aa" gracePeriod=600 Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.378946 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" event={"ID":"08376459-180b-411f-9c74-c918980541f6","Type":"ContainerStarted","Data":"5a8a0553f8d6ae8a366a318b816ae66a26535eecfbfcf585b2602f3fd9d732b3"} Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.384005 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="7b6dd9da01186a3ce4b866b2112d5b02fb5c358a2952aa59bf01efd8cd71d7aa" exitCode=0 Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.384052 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"7b6dd9da01186a3ce4b866b2112d5b02fb5c358a2952aa59bf01efd8cd71d7aa"} Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.384078 4775 
scope.go:117] "RemoveContainer" containerID="f9e60c7320dcbc3b2c5ac1396fe8089095784ebc9e95a14db7f39bea21a7ea59" Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.386214 4775 generic.go:334] "Generic (PLEG): container finished" podID="648a6737-d8ad-461b-a889-779a45c37ed7" containerID="c881b4dd25a497f6933e7431d257183481487eddd449885f888661614542bd32" exitCode=0 Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.386264 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4vl" event={"ID":"648a6737-d8ad-461b-a889-779a45c37ed7","Type":"ContainerDied","Data":"c881b4dd25a497f6933e7431d257183481487eddd449885f888661614542bd32"} Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.389474 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" event={"ID":"b5f009d3-7b77-49c1-b5f1-b8219b31ed47","Type":"ContainerStarted","Data":"f8955fcdf4f224d86c2e5e42b82d3dc1a6f937593c7e1ddf9350c4f28d5276f3"} Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.389606 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:41 crc kubenswrapper[4775]: I1125 19:49:41.438259 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" podStartSLOduration=27.438243516 podStartE2EDuration="27.438243516s" podCreationTimestamp="2025-11-25 19:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:49:41.437750763 +0000 UTC m=+963.354113119" watchObservedRunningTime="2025-11-25 19:49:41.438243516 +0000 UTC m=+963.354605882" Nov 25 19:49:43 crc kubenswrapper[4775]: E1125 19:49:43.748289 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" podUID="a778d0b3-0440-4c61-8a61-59524e36835e" Nov 25 19:49:44 crc kubenswrapper[4775]: E1125 19:49:44.005742 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" podUID="e7a4f97f-5b6f-4347-b156-d96e1be21183" Nov 25 19:49:44 crc kubenswrapper[4775]: E1125 19:49:44.020766 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" podUID="d8f444e1-3e73-4daa-a5f0-4fe2236a691b" Nov 25 19:49:44 crc kubenswrapper[4775]: E1125 19:49:44.069062 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" podUID="d01f394d-062f-4736-a7fa-abe501a5b2d9" Nov 25 19:49:44 crc kubenswrapper[4775]: E1125 19:49:44.069755 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" podUID="6910455f-354f-4f91-8333-5cb54be87db6" Nov 25 19:49:44 crc kubenswrapper[4775]: E1125 19:49:44.412059 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" podUID="f1338e2e-e4e6-4c4b-a410-72e2d1acab0d" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.464549 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" event={"ID":"9a436d5c-4f54-479c-846f-11e5d66d91fa","Type":"ContainerStarted","Data":"306c881b052bbb9e58f2b6c30c64370445eb33b0e0ecc69da21bd7d89aef7e61"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.465021 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.469977 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.481904 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" event={"ID":"08376459-180b-411f-9c74-c918980541f6","Type":"ContainerStarted","Data":"d65b85998a35e7c8d7e5fe02e2409dd85389ab51271521d19391aaab476de321"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.482460 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.491779 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-5nm9r" podStartSLOduration=3.877284627 podStartE2EDuration="31.491766259s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.712821172 +0000 UTC m=+937.629183548" lastFinishedPulling="2025-11-25 19:49:43.327302814 +0000 UTC 
m=+965.243665180" observedRunningTime="2025-11-25 19:49:44.488692047 +0000 UTC m=+966.405054413" watchObservedRunningTime="2025-11-25 19:49:44.491766259 +0000 UTC m=+966.408128625" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.506288 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" event={"ID":"d9838469-3633-4b7d-88dc-0a6fd8c272ce","Type":"ContainerStarted","Data":"23089a38f5eefcc7f730da34ba63cd321cced138ceeaeb0e9ad03aa7cdd83988"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.506586 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.509902 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.514100 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" event={"ID":"256bc456-e90c-4c18-8531-9d0470473b55","Type":"ContainerStarted","Data":"cf9c0ab4d94b680bb2abb1267fb7f1f6ea35f4917e773acd9b81efd8ea013979"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.514134 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" event={"ID":"256bc456-e90c-4c18-8531-9d0470473b55","Type":"ContainerStarted","Data":"4b7e3ddeb56098b261e2dadbbe23fb923a10fbc0dbbaa31f5596f38c3eefeecf"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.514436 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.520638 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" podStartSLOduration=3.946311003 podStartE2EDuration="31.5206191s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.762248447 +0000 UTC m=+937.678610813" lastFinishedPulling="2025-11-25 19:49:43.336556504 +0000 UTC m=+965.252918910" observedRunningTime="2025-11-25 19:49:44.517734322 +0000 UTC m=+966.434096688" watchObservedRunningTime="2025-11-25 19:49:44.5206191 +0000 UTC m=+966.436981466" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.529958 4775 generic.go:334] "Generic (PLEG): container finished" podID="648a6737-d8ad-461b-a889-779a45c37ed7" containerID="817e362328351edfb73fd95f9a562310ac4c5f0a01e4c8e75ce17bc262349a39" exitCode=0 Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.530042 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4vl" event={"ID":"648a6737-d8ad-461b-a889-779a45c37ed7","Type":"ContainerDied","Data":"817e362328351edfb73fd95f9a562310ac4c5f0a01e4c8e75ce17bc262349a39"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.539981 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" event={"ID":"fb22768d-951e-4a69-bba6-8728e80e2935","Type":"ContainerStarted","Data":"e0134a72526a158a1e85b05dea3e99f913249d9d428454d710ad9fa991f8f562"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.541933 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.547852 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.563891 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" event={"ID":"d01f394d-062f-4736-a7fa-abe501a5b2d9","Type":"ContainerStarted","Data":"cc25c87fcd9e33e664678739efe229a93c6e5f8f5875dbb82ae6b757d3164f9a"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.585616 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" event={"ID":"d8f444e1-3e73-4daa-a5f0-4fe2236a691b","Type":"ContainerStarted","Data":"0dde661c7eb2444f33143fd7781b6b0578dd0d69a67a652807768d54bf07920b"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.587402 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" podStartSLOduration=28.246562596 podStartE2EDuration="31.587289371s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:39.877694027 +0000 UTC m=+961.794056393" lastFinishedPulling="2025-11-25 19:49:43.218420792 +0000 UTC m=+965.134783168" observedRunningTime="2025-11-25 19:49:44.584209707 +0000 UTC m=+966.500572073" watchObservedRunningTime="2025-11-25 19:49:44.587289371 +0000 UTC m=+966.503651737" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.611973 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" event={"ID":"7b75a9f0-bd88-4e53-973a-0ce97e41cec8","Type":"ContainerStarted","Data":"c13c79515a418c9b9f893f75c36c15cd643075d608a5a4c398433b102ad10c79"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.614050 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-mh8t8" podStartSLOduration=4.020050356 podStartE2EDuration="31.614038934s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.689130152 +0000 UTC 
m=+937.605492518" lastFinishedPulling="2025-11-25 19:49:43.28311873 +0000 UTC m=+965.199481096" observedRunningTime="2025-11-25 19:49:44.61205008 +0000 UTC m=+966.528412436" watchObservedRunningTime="2025-11-25 19:49:44.614038934 +0000 UTC m=+966.530401300" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.648136 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" event={"ID":"fdbde397-fc85-41aa-915f-3b8d77553adc","Type":"ContainerStarted","Data":"566a90243a3cf08a0b700600eca2c265f6d8eac9f9f17ed0ec294ac6edf166ad"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.650671 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.654798 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.676424 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" event={"ID":"a778d0b3-0440-4c61-8a61-59524e36835e","Type":"ContainerStarted","Data":"f85307b7d337b83de9f695f79fb5040a4ad6358886c33d968e8c843f85c36121"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.677146 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-p6c9k" podStartSLOduration=4.061092756 podStartE2EDuration="31.677127419s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.718070714 +0000 UTC m=+937.634433080" lastFinishedPulling="2025-11-25 19:49:43.334105347 +0000 UTC m=+965.250467743" observedRunningTime="2025-11-25 19:49:44.675553406 +0000 UTC m=+966.591915772" watchObservedRunningTime="2025-11-25 
19:49:44.677127419 +0000 UTC m=+966.593489785" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.696019 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" event={"ID":"99a43674-e3dd-46c8-8fe7-b527112b3ff1","Type":"ContainerStarted","Data":"15f26608e3f43b9b7314947c09d0ecebf821bda86b3878521092d86f317916e1"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.696660 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.698092 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" event={"ID":"ca39bca1-68fa-4d64-a929-1b3d013bb679","Type":"ContainerStarted","Data":"b301576eba2b290078543232badc26cf7c3bc5ef0e1e28549b801d7c75e731d1"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.698791 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.728011 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.750277 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" event={"ID":"f1338e2e-e4e6-4c4b-a410-72e2d1acab0d","Type":"ContainerStarted","Data":"18ab37125db31c3e7821fc75599849bb83514723786d42bc5aea8ec7ba140eb0"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.769865 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" 
event={"ID":"043fa652-c214-4428-877b-723905f53acb","Type":"ContainerStarted","Data":"86784374502fa4c23fabd3551047ef60f5825ca6ed35a614019d49867085d819"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.778300 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" event={"ID":"8af71b48-ef6a-4e7f-8d32-e627f46a93ff","Type":"ContainerStarted","Data":"ca41d0b5048e92d3983e7cdce6abe20217b8f41ff3fd3379f30032e77960db7a"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.778730 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.786811 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" event={"ID":"c368de49-6c69-4140-a8c2-21c7afc13031","Type":"ContainerStarted","Data":"325460b31d90495cc0535ab3cfb0370c1a1886f4dd717c4cd34247eb32412a3d"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.787897 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.793700 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-snkf4" podStartSLOduration=4.156934103 podStartE2EDuration="31.793687988s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.678331719 +0000 UTC m=+937.594694085" lastFinishedPulling="2025-11-25 19:49:43.315085604 +0000 UTC m=+965.231447970" observedRunningTime="2025-11-25 19:49:44.793294888 +0000 UTC m=+966.709657254" watchObservedRunningTime="2025-11-25 19:49:44.793687988 +0000 UTC m=+966.710050344" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.798828 
4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.800337 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" event={"ID":"e7a4f97f-5b6f-4347-b156-d96e1be21183","Type":"ContainerStarted","Data":"6f2a2938c9a9e329fb3390c14018c723334b1729381fa8bd6b18f92e5310338f"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.818275 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" event={"ID":"6910455f-354f-4f91-8333-5cb54be87db6","Type":"ContainerStarted","Data":"4f9faafd53ea422cead9739e88284b2de1653c09d50900fd7c018388a52d2ab9"} Nov 25 19:49:44 crc kubenswrapper[4775]: E1125 19:49:44.819427 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" podUID="6910455f-354f-4f91-8333-5cb54be87db6" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.820947 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"9d57ed892e1f28c6ded4ad19e2041e94c1c82f1ac3bd35631b061f8f7717302b"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.822440 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" 
event={"ID":"360afa93-07ee-47ad-beb7-cd45b9cc9bef","Type":"ContainerStarted","Data":"4e36b6495f1e5f0d6e222c7b5d3b1390ebdcc31b97288faf72a56410427898d5"} Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.823132 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.833619 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mhvjh" podStartSLOduration=3.124900018 podStartE2EDuration="30.833602927s" podCreationTimestamp="2025-11-25 19:49:14 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.668406611 +0000 UTC m=+937.584768977" lastFinishedPulling="2025-11-25 19:49:43.37710952 +0000 UTC m=+965.293471886" observedRunningTime="2025-11-25 19:49:44.817616755 +0000 UTC m=+966.733979121" watchObservedRunningTime="2025-11-25 19:49:44.833602927 +0000 UTC m=+966.749965293" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.901326 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.929013 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84dfd86bd6-8nk5f" podStartSLOduration=3.289567917 podStartE2EDuration="31.928987264s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:14.690131377 +0000 UTC m=+936.606493743" lastFinishedPulling="2025-11-25 19:49:43.329550724 +0000 UTC m=+965.245913090" observedRunningTime="2025-11-25 19:49:44.886923647 +0000 UTC m=+966.803286013" watchObservedRunningTime="2025-11-25 19:49:44.928987264 +0000 UTC m=+966.845349630" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.966414 4775 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" podStartSLOduration=4.321633145 podStartE2EDuration="31.966393915s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.735687219 +0000 UTC m=+937.652049585" lastFinishedPulling="2025-11-25 19:49:43.380447989 +0000 UTC m=+965.296810355" observedRunningTime="2025-11-25 19:49:44.918557873 +0000 UTC m=+966.834920239" watchObservedRunningTime="2025-11-25 19:49:44.966393915 +0000 UTC m=+966.882756281" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.976097 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" podStartSLOduration=4.389287942 podStartE2EDuration="31.976084937s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.735234837 +0000 UTC m=+937.651597203" lastFinishedPulling="2025-11-25 19:49:43.322031832 +0000 UTC m=+965.238394198" observedRunningTime="2025-11-25 19:49:44.964411592 +0000 UTC m=+966.880773948" watchObservedRunningTime="2025-11-25 19:49:44.976084937 +0000 UTC m=+966.892447303" Nov 25 19:49:44 crc kubenswrapper[4775]: I1125 19:49:44.997265 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" podStartSLOduration=4.392080699 podStartE2EDuration="31.997249999s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.762509124 +0000 UTC m=+937.678871490" lastFinishedPulling="2025-11-25 19:49:43.367678424 +0000 UTC m=+965.284040790" observedRunningTime="2025-11-25 19:49:44.993808466 +0000 UTC m=+966.910170832" watchObservedRunningTime="2025-11-25 19:49:44.997249999 +0000 UTC m=+966.913612365" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.159897 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-5b77f656f-knzcj" podStartSLOduration=3.882403676 podStartE2EDuration="32.159880933s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.083815555 +0000 UTC m=+937.000177921" lastFinishedPulling="2025-11-25 19:49:43.361292772 +0000 UTC m=+965.277655178" observedRunningTime="2025-11-25 19:49:45.159542294 +0000 UTC m=+967.075904660" watchObservedRunningTime="2025-11-25 19:49:45.159880933 +0000 UTC m=+967.076243299" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.832143 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4vl" event={"ID":"648a6737-d8ad-461b-a889-779a45c37ed7","Type":"ContainerStarted","Data":"07ae68161234d90071f9373db54a466988ce096e209151f9f10ae0f8847a7f5e"} Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.833960 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" event={"ID":"74ce2e86-cedf-4014-8d4c-8c126d58e7c9","Type":"ContainerStarted","Data":"b5a444dc9e42a717c9960f9ff39ee2537da12b300202f1e6b504b68f29252e06"} Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.834195 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.835501 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" event={"ID":"e7a4f97f-5b6f-4347-b156-d96e1be21183","Type":"ContainerStarted","Data":"d1ecfe73f34f54d524adb22eca6b2a852d791d5c6bb40c09ef68815d95d9ccb8"} Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.835622 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 
19:49:45.836524 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.837108 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" event={"ID":"d01f394d-062f-4736-a7fa-abe501a5b2d9","Type":"ContainerStarted","Data":"4a160d90a4d621ed67049ba44eeb5e36f6d146fa94b93066be09295017ebea83"} Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.837246 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.838730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" event={"ID":"a778d0b3-0440-4c61-8a61-59524e36835e","Type":"ContainerStarted","Data":"4936d5a7f4556d1595d046d0aa826a78febcc1f05323d8756dc5fcc849c3fd1c"} Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.838843 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.840583 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" event={"ID":"7b75a9f0-bd88-4e53-973a-0ce97e41cec8","Type":"ContainerStarted","Data":"7157fa1d9acfc122dfa3a622f542cd0da09dac4f3245c505bdee5b739d1f094d"} Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.840640 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.841960 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" event={"ID":"88abb3bd-eb47-4185-a1a9-4f300ed99167","Type":"ContainerStarted","Data":"83d4988c97f88383d3d07b325c9029d1727474d54877d50c798b28f1738f850c"} Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.842147 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.844367 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" event={"ID":"f1338e2e-e4e6-4c4b-a410-72e2d1acab0d","Type":"ContainerStarted","Data":"c0c9d8ede2bede429c0d1ebaf8efbe2a4608cd98dad9fc235f8d62a018857608"} Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.844484 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.844721 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.845704 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" event={"ID":"d8f444e1-3e73-4daa-a5f0-4fe2236a691b","Type":"ContainerStarted","Data":"65f4720f51d53596a0390a63c2a8f11d043146cb7001bd20f4067d71122f4ffa"} Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.846228 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.847472 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-qdknn" Nov 25 
19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.848923 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-s5nz8" Nov 25 19:49:45 crc kubenswrapper[4775]: E1125 19:49:45.848942 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" podUID="6910455f-354f-4f91-8333-5cb54be87db6" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.849109 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qnklp" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.849250 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-892tw" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.859360 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pl4vl" podStartSLOduration=13.697196037 podStartE2EDuration="16.859342745s" podCreationTimestamp="2025-11-25 19:49:29 +0000 UTC" firstStartedPulling="2025-11-25 19:49:41.997448448 +0000 UTC m=+963.913810814" lastFinishedPulling="2025-11-25 19:49:45.159595156 +0000 UTC m=+967.075957522" observedRunningTime="2025-11-25 19:49:45.857349812 +0000 UTC m=+967.773712168" watchObservedRunningTime="2025-11-25 19:49:45.859342745 +0000 UTC m=+967.775705111" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.874948 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" podStartSLOduration=2.808078956 
podStartE2EDuration="32.874933296s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.092145 +0000 UTC m=+937.008507366" lastFinishedPulling="2025-11-25 19:49:45.15899934 +0000 UTC m=+967.075361706" observedRunningTime="2025-11-25 19:49:45.87281985 +0000 UTC m=+967.789182216" watchObservedRunningTime="2025-11-25 19:49:45.874933296 +0000 UTC m=+967.791295652" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.945285 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" podStartSLOduration=2.898910361 podStartE2EDuration="32.945081132s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.230623882 +0000 UTC m=+937.146986248" lastFinishedPulling="2025-11-25 19:49:45.276794653 +0000 UTC m=+967.193157019" observedRunningTime="2025-11-25 19:49:45.921694639 +0000 UTC m=+967.838057005" watchObservedRunningTime="2025-11-25 19:49:45.945081132 +0000 UTC m=+967.861443498" Nov 25 19:49:45 crc kubenswrapper[4775]: I1125 19:49:45.974861 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" podStartSLOduration=3.642938395 podStartE2EDuration="32.974844366s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.71793592 +0000 UTC m=+937.634298286" lastFinishedPulling="2025-11-25 19:49:45.049841891 +0000 UTC m=+966.966204257" observedRunningTime="2025-11-25 19:49:45.973971993 +0000 UTC m=+967.890334359" watchObservedRunningTime="2025-11-25 19:49:45.974844366 +0000 UTC m=+967.891206732" Nov 25 19:49:46 crc kubenswrapper[4775]: I1125 19:49:46.058026 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-sj9pg" podStartSLOduration=4.96165233 podStartE2EDuration="33.058003873s" 
podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.232625486 +0000 UTC m=+937.148987852" lastFinishedPulling="2025-11-25 19:49:43.328977029 +0000 UTC m=+965.245339395" observedRunningTime="2025-11-25 19:49:46.053032859 +0000 UTC m=+967.969395225" watchObservedRunningTime="2025-11-25 19:49:46.058003873 +0000 UTC m=+967.974366239" Nov 25 19:49:46 crc kubenswrapper[4775]: I1125 19:49:46.078278 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" podStartSLOduration=2.3917628349999998 podStartE2EDuration="33.07826164s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:14.661557274 +0000 UTC m=+936.577919640" lastFinishedPulling="2025-11-25 19:49:45.348056069 +0000 UTC m=+967.264418445" observedRunningTime="2025-11-25 19:49:46.076504383 +0000 UTC m=+967.992866749" watchObservedRunningTime="2025-11-25 19:49:46.07826164 +0000 UTC m=+967.994623996" Nov 25 19:49:46 crc kubenswrapper[4775]: I1125 19:49:46.111257 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" podStartSLOduration=3.4535195180000002 podStartE2EDuration="33.111236972s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.690390766 +0000 UTC m=+937.606753132" lastFinishedPulling="2025-11-25 19:49:45.34810822 +0000 UTC m=+967.264470586" observedRunningTime="2025-11-25 19:49:46.107066719 +0000 UTC m=+968.023429105" watchObservedRunningTime="2025-11-25 19:49:46.111236972 +0000 UTC m=+968.027599338" Nov 25 19:49:46 crc kubenswrapper[4775]: I1125 19:49:46.144382 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" podStartSLOduration=29.820467558 podStartE2EDuration="33.144359517s" 
podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:39.845786275 +0000 UTC m=+961.762148641" lastFinishedPulling="2025-11-25 19:49:43.169678194 +0000 UTC m=+965.086040600" observedRunningTime="2025-11-25 19:49:46.135930118 +0000 UTC m=+968.052292484" watchObservedRunningTime="2025-11-25 19:49:46.144359517 +0000 UTC m=+968.060721883" Nov 25 19:49:46 crc kubenswrapper[4775]: I1125 19:49:46.857351 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" Nov 25 19:49:47 crc kubenswrapper[4775]: I1125 19:49:47.867407 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-rlx29" podStartSLOduration=7.265080934 podStartE2EDuration="34.867390347s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.741401374 +0000 UTC m=+937.657763740" lastFinishedPulling="2025-11-25 19:49:43.343710787 +0000 UTC m=+965.260073153" observedRunningTime="2025-11-25 19:49:46.190120983 +0000 UTC m=+968.106483349" watchObservedRunningTime="2025-11-25 19:49:47.867390347 +0000 UTC m=+969.783752723" Nov 25 19:49:48 crc kubenswrapper[4775]: I1125 19:49:48.871598 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" event={"ID":"592eda0a-f963-48bf-9902-3e52795051e3","Type":"ContainerStarted","Data":"9c69c134162698571cd981b331e6ac7cb9154c7de9fdaa9ba9c792cc4b0a2725"} Nov 25 19:49:48 crc kubenswrapper[4775]: I1125 19:49:48.896985 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hg4df" podStartSLOduration=2.291421347 podStartE2EDuration="34.896967099s" podCreationTimestamp="2025-11-25 19:49:14 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.690718654 +0000 UTC m=+937.607081010" 
lastFinishedPulling="2025-11-25 19:49:48.296264396 +0000 UTC m=+970.212626762" observedRunningTime="2025-11-25 19:49:48.89292931 +0000 UTC m=+970.809291676" watchObservedRunningTime="2025-11-25 19:49:48.896967099 +0000 UTC m=+970.813329465" Nov 25 19:49:49 crc kubenswrapper[4775]: I1125 19:49:49.835205 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tcv4j" Nov 25 19:49:49 crc kubenswrapper[4775]: I1125 19:49:49.890858 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv" Nov 25 19:49:50 crc kubenswrapper[4775]: I1125 19:49:50.173181 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7f84d9dcfc-rrwgl" Nov 25 19:49:50 crc kubenswrapper[4775]: I1125 19:49:50.286593 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:50 crc kubenswrapper[4775]: I1125 19:49:50.286671 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:50 crc kubenswrapper[4775]: I1125 19:49:50.350727 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:50 crc kubenswrapper[4775]: I1125 19:49:50.966351 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:49:51 crc kubenswrapper[4775]: I1125 19:49:51.029789 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pl4vl"] Nov 25 19:49:52 crc kubenswrapper[4775]: I1125 19:49:52.925951 4775 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-pl4vl" podUID="648a6737-d8ad-461b-a889-779a45c37ed7" containerName="registry-server" containerID="cri-o://07ae68161234d90071f9373db54a466988ce096e209151f9f10ae0f8847a7f5e" gracePeriod=2 Nov 25 19:49:53 crc kubenswrapper[4775]: I1125 19:49:53.844964 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-vn52j" Nov 25 19:49:53 crc kubenswrapper[4775]: I1125 19:49:53.939992 4775 generic.go:334] "Generic (PLEG): container finished" podID="648a6737-d8ad-461b-a889-779a45c37ed7" containerID="07ae68161234d90071f9373db54a466988ce096e209151f9f10ae0f8847a7f5e" exitCode=0 Nov 25 19:49:53 crc kubenswrapper[4775]: I1125 19:49:53.940050 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4vl" event={"ID":"648a6737-d8ad-461b-a889-779a45c37ed7","Type":"ContainerDied","Data":"07ae68161234d90071f9373db54a466988ce096e209151f9f10ae0f8847a7f5e"} Nov 25 19:49:53 crc kubenswrapper[4775]: I1125 19:49:53.947150 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-86lv6" Nov 25 19:49:54 crc kubenswrapper[4775]: I1125 19:49:54.086860 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-w8459" Nov 25 19:49:54 crc kubenswrapper[4775]: I1125 19:49:54.095191 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-sd2lc" Nov 25 19:49:54 crc kubenswrapper[4775]: I1125 19:49:54.370968 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-w2rwh" Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.232053 4775 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.249885 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-utilities\") pod \"648a6737-d8ad-461b-a889-779a45c37ed7\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.249927 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhfpq\" (UniqueName: \"kubernetes.io/projected/648a6737-d8ad-461b-a889-779a45c37ed7-kube-api-access-qhfpq\") pod \"648a6737-d8ad-461b-a889-779a45c37ed7\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.250014 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-catalog-content\") pod \"648a6737-d8ad-461b-a889-779a45c37ed7\" (UID: \"648a6737-d8ad-461b-a889-779a45c37ed7\") " Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.256551 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648a6737-d8ad-461b-a889-779a45c37ed7-kube-api-access-qhfpq" (OuterVolumeSpecName: "kube-api-access-qhfpq") pod "648a6737-d8ad-461b-a889-779a45c37ed7" (UID: "648a6737-d8ad-461b-a889-779a45c37ed7"). InnerVolumeSpecName "kube-api-access-qhfpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.259759 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-utilities" (OuterVolumeSpecName: "utilities") pod "648a6737-d8ad-461b-a889-779a45c37ed7" (UID: "648a6737-d8ad-461b-a889-779a45c37ed7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.301784 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "648a6737-d8ad-461b-a889-779a45c37ed7" (UID: "648a6737-d8ad-461b-a889-779a45c37ed7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.351489 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.351534 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhfpq\" (UniqueName: \"kubernetes.io/projected/648a6737-d8ad-461b-a889-779a45c37ed7-kube-api-access-qhfpq\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:00 crc kubenswrapper[4775]: I1125 19:50:00.351549 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648a6737-d8ad-461b-a889-779a45c37ed7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:01 crc kubenswrapper[4775]: I1125 19:50:01.011305 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4vl" event={"ID":"648a6737-d8ad-461b-a889-779a45c37ed7","Type":"ContainerDied","Data":"06a0e7c9065a2b8220f5f7a8f5c4a6b85019e2f2c29a76bc9073ba07ab2da8bd"} Nov 25 19:50:01 crc kubenswrapper[4775]: I1125 19:50:01.011798 4775 scope.go:117] "RemoveContainer" containerID="07ae68161234d90071f9373db54a466988ce096e209151f9f10ae0f8847a7f5e" Nov 25 19:50:01 crc kubenswrapper[4775]: I1125 19:50:01.011400 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pl4vl" Nov 25 19:50:01 crc kubenswrapper[4775]: I1125 19:50:01.058879 4775 scope.go:117] "RemoveContainer" containerID="817e362328351edfb73fd95f9a562310ac4c5f0a01e4c8e75ce17bc262349a39" Nov 25 19:50:01 crc kubenswrapper[4775]: I1125 19:50:01.067782 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pl4vl"] Nov 25 19:50:01 crc kubenswrapper[4775]: I1125 19:50:01.079445 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pl4vl"] Nov 25 19:50:01 crc kubenswrapper[4775]: I1125 19:50:01.086128 4775 scope.go:117] "RemoveContainer" containerID="c881b4dd25a497f6933e7431d257183481487eddd449885f888661614542bd32" Nov 25 19:50:02 crc kubenswrapper[4775]: I1125 19:50:02.860956 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="648a6737-d8ad-461b-a889-779a45c37ed7" path="/var/lib/kubelet/pods/648a6737-d8ad-461b-a889-779a45c37ed7/volumes" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.388266 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6bhcm"] Nov 25 19:50:03 crc kubenswrapper[4775]: E1125 19:50:03.388853 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerName="extract-content" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.388892 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerName="extract-content" Nov 25 19:50:03 crc kubenswrapper[4775]: E1125 19:50:03.388917 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648a6737-d8ad-461b-a889-779a45c37ed7" containerName="extract-content" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.388934 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="648a6737-d8ad-461b-a889-779a45c37ed7" containerName="extract-content" Nov 
25 19:50:03 crc kubenswrapper[4775]: E1125 19:50:03.388986 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648a6737-d8ad-461b-a889-779a45c37ed7" containerName="extract-utilities" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.389005 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="648a6737-d8ad-461b-a889-779a45c37ed7" containerName="extract-utilities" Nov 25 19:50:03 crc kubenswrapper[4775]: E1125 19:50:03.389031 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648a6737-d8ad-461b-a889-779a45c37ed7" containerName="registry-server" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.389048 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="648a6737-d8ad-461b-a889-779a45c37ed7" containerName="registry-server" Nov 25 19:50:03 crc kubenswrapper[4775]: E1125 19:50:03.389076 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerName="registry-server" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.389092 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerName="registry-server" Nov 25 19:50:03 crc kubenswrapper[4775]: E1125 19:50:03.389138 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerName="extract-utilities" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.389157 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerName="extract-utilities" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.389501 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="648a6737-d8ad-461b-a889-779a45c37ed7" containerName="registry-server" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.389551 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="83480cff-9fa1-4812-90a4-0bedd4ba5637" containerName="registry-server" 
Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.391982 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.396328 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-utilities\") pod \"redhat-marketplace-6bhcm\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.396485 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvdt\" (UniqueName: \"kubernetes.io/projected/8f9a71be-4cc8-4d23-87f5-5658b8189e68-kube-api-access-8pvdt\") pod \"redhat-marketplace-6bhcm\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.396573 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-catalog-content\") pod \"redhat-marketplace-6bhcm\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.415243 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bhcm"] Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.498363 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pvdt\" (UniqueName: \"kubernetes.io/projected/8f9a71be-4cc8-4d23-87f5-5658b8189e68-kube-api-access-8pvdt\") pod \"redhat-marketplace-6bhcm\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " 
pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.498492 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-catalog-content\") pod \"redhat-marketplace-6bhcm\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.498734 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-utilities\") pod \"redhat-marketplace-6bhcm\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.499148 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-catalog-content\") pod \"redhat-marketplace-6bhcm\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.499442 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-utilities\") pod \"redhat-marketplace-6bhcm\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.532022 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pvdt\" (UniqueName: \"kubernetes.io/projected/8f9a71be-4cc8-4d23-87f5-5658b8189e68-kube-api-access-8pvdt\") pod \"redhat-marketplace-6bhcm\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 
19:50:03 crc kubenswrapper[4775]: I1125 19:50:03.724384 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:04 crc kubenswrapper[4775]: I1125 19:50:04.572756 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bhcm"] Nov 25 19:50:04 crc kubenswrapper[4775]: W1125 19:50:04.573154 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f9a71be_4cc8_4d23_87f5_5658b8189e68.slice/crio-ec61fe0f1ae312f165f11c3fa6f33371da9e09c07b677621ec8e93439a890730 WatchSource:0}: Error finding container ec61fe0f1ae312f165f11c3fa6f33371da9e09c07b677621ec8e93439a890730: Status 404 returned error can't find the container with id ec61fe0f1ae312f165f11c3fa6f33371da9e09c07b677621ec8e93439a890730 Nov 25 19:50:05 crc kubenswrapper[4775]: I1125 19:50:05.047745 4775 generic.go:334] "Generic (PLEG): container finished" podID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerID="29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46" exitCode=0 Nov 25 19:50:05 crc kubenswrapper[4775]: I1125 19:50:05.047815 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bhcm" event={"ID":"8f9a71be-4cc8-4d23-87f5-5658b8189e68","Type":"ContainerDied","Data":"29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46"} Nov 25 19:50:05 crc kubenswrapper[4775]: I1125 19:50:05.047842 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bhcm" event={"ID":"8f9a71be-4cc8-4d23-87f5-5658b8189e68","Type":"ContainerStarted","Data":"ec61fe0f1ae312f165f11c3fa6f33371da9e09c07b677621ec8e93439a890730"} Nov 25 19:50:05 crc kubenswrapper[4775]: I1125 19:50:05.049520 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 19:50:05 crc kubenswrapper[4775]: 
I1125 19:50:05.051416 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" event={"ID":"6910455f-354f-4f91-8333-5cb54be87db6","Type":"ContainerStarted","Data":"1b21c0912a8ae0d32cb237596f50915517728069965f9d880a88c2bf6ee75c63"} Nov 25 19:50:05 crc kubenswrapper[4775]: I1125 19:50:05.051616 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" Nov 25 19:50:06 crc kubenswrapper[4775]: I1125 19:50:06.060589 4775 generic.go:334] "Generic (PLEG): container finished" podID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerID="b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a" exitCode=0 Nov 25 19:50:06 crc kubenswrapper[4775]: I1125 19:50:06.060692 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bhcm" event={"ID":"8f9a71be-4cc8-4d23-87f5-5658b8189e68","Type":"ContainerDied","Data":"b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a"} Nov 25 19:50:06 crc kubenswrapper[4775]: I1125 19:50:06.087850 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" podStartSLOduration=4.161056094 podStartE2EDuration="53.087815641s" podCreationTimestamp="2025-11-25 19:49:13 +0000 UTC" firstStartedPulling="2025-11-25 19:49:15.092287984 +0000 UTC m=+937.008650340" lastFinishedPulling="2025-11-25 19:50:04.019047521 +0000 UTC m=+985.935409887" observedRunningTime="2025-11-25 19:50:05.096582826 +0000 UTC m=+987.012945232" watchObservedRunningTime="2025-11-25 19:50:06.087815641 +0000 UTC m=+988.004178017" Nov 25 19:50:07 crc kubenswrapper[4775]: I1125 19:50:07.072087 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bhcm" 
event={"ID":"8f9a71be-4cc8-4d23-87f5-5658b8189e68","Type":"ContainerStarted","Data":"990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab"} Nov 25 19:50:07 crc kubenswrapper[4775]: I1125 19:50:07.101393 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6bhcm" podStartSLOduration=2.48510474 podStartE2EDuration="4.101369053s" podCreationTimestamp="2025-11-25 19:50:03 +0000 UTC" firstStartedPulling="2025-11-25 19:50:05.04927576 +0000 UTC m=+986.965638126" lastFinishedPulling="2025-11-25 19:50:06.665540073 +0000 UTC m=+988.581902439" observedRunningTime="2025-11-25 19:50:07.096224076 +0000 UTC m=+989.012586462" watchObservedRunningTime="2025-11-25 19:50:07.101369053 +0000 UTC m=+989.017731459" Nov 25 19:50:13 crc kubenswrapper[4775]: I1125 19:50:13.724717 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:13 crc kubenswrapper[4775]: I1125 19:50:13.725279 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:13 crc kubenswrapper[4775]: I1125 19:50:13.806834 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:14 crc kubenswrapper[4775]: I1125 19:50:14.018480 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-hjdwf" Nov 25 19:50:14 crc kubenswrapper[4775]: I1125 19:50:14.205561 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:14 crc kubenswrapper[4775]: I1125 19:50:14.272827 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bhcm"] Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.142492 4775 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6bhcm" podUID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerName="registry-server" containerID="cri-o://990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab" gracePeriod=2 Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.686277 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.801281 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-catalog-content\") pod \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.801467 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pvdt\" (UniqueName: \"kubernetes.io/projected/8f9a71be-4cc8-4d23-87f5-5658b8189e68-kube-api-access-8pvdt\") pod \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.801519 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-utilities\") pod \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\" (UID: \"8f9a71be-4cc8-4d23-87f5-5658b8189e68\") " Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.802434 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-utilities" (OuterVolumeSpecName: "utilities") pod "8f9a71be-4cc8-4d23-87f5-5658b8189e68" (UID: "8f9a71be-4cc8-4d23-87f5-5658b8189e68"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.809479 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9a71be-4cc8-4d23-87f5-5658b8189e68-kube-api-access-8pvdt" (OuterVolumeSpecName: "kube-api-access-8pvdt") pod "8f9a71be-4cc8-4d23-87f5-5658b8189e68" (UID: "8f9a71be-4cc8-4d23-87f5-5658b8189e68"). InnerVolumeSpecName "kube-api-access-8pvdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.822865 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f9a71be-4cc8-4d23-87f5-5658b8189e68" (UID: "8f9a71be-4cc8-4d23-87f5-5658b8189e68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.903273 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pvdt\" (UniqueName: \"kubernetes.io/projected/8f9a71be-4cc8-4d23-87f5-5658b8189e68-kube-api-access-8pvdt\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.903321 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:16 crc kubenswrapper[4775]: I1125 19:50:16.903341 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9a71be-4cc8-4d23-87f5-5658b8189e68-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.304256 4775 generic.go:334] "Generic (PLEG): container finished" podID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" 
containerID="990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab" exitCode=0 Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.304334 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bhcm" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.304332 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bhcm" event={"ID":"8f9a71be-4cc8-4d23-87f5-5658b8189e68","Type":"ContainerDied","Data":"990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab"} Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.304486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bhcm" event={"ID":"8f9a71be-4cc8-4d23-87f5-5658b8189e68","Type":"ContainerDied","Data":"ec61fe0f1ae312f165f11c3fa6f33371da9e09c07b677621ec8e93439a890730"} Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.304528 4775 scope.go:117] "RemoveContainer" containerID="990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.338570 4775 scope.go:117] "RemoveContainer" containerID="b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.348750 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bhcm"] Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.392463 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bhcm"] Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.398179 4775 scope.go:117] "RemoveContainer" containerID="29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.453794 4775 scope.go:117] "RemoveContainer" containerID="990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab" Nov 25 
19:50:17 crc kubenswrapper[4775]: E1125 19:50:17.454383 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab\": container with ID starting with 990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab not found: ID does not exist" containerID="990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.454509 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab"} err="failed to get container status \"990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab\": rpc error: code = NotFound desc = could not find container \"990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab\": container with ID starting with 990cb9d03778d06dfc420fa2b6a08f13c91f269e696f45acf9ba8a592714c8ab not found: ID does not exist" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.454719 4775 scope.go:117] "RemoveContainer" containerID="b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a" Nov 25 19:50:17 crc kubenswrapper[4775]: E1125 19:50:17.455190 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a\": container with ID starting with b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a not found: ID does not exist" containerID="b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.455254 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a"} err="failed to get container status 
\"b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a\": rpc error: code = NotFound desc = could not find container \"b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a\": container with ID starting with b73355c42089c040575b65e31dd2d574c1facb5a8fd3848edd11a8244f14105a not found: ID does not exist" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.455295 4775 scope.go:117] "RemoveContainer" containerID="29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46" Nov 25 19:50:17 crc kubenswrapper[4775]: E1125 19:50:17.455800 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46\": container with ID starting with 29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46 not found: ID does not exist" containerID="29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46" Nov 25 19:50:17 crc kubenswrapper[4775]: I1125 19:50:17.455926 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46"} err="failed to get container status \"29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46\": rpc error: code = NotFound desc = could not find container \"29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46\": container with ID starting with 29af235c2f1772805a068e202e9397d9f956b5fcecbc00530e8d4a49ca61ce46 not found: ID does not exist" Nov 25 19:50:18 crc kubenswrapper[4775]: I1125 19:50:18.866412 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" path="/var/lib/kubelet/pods/8f9a71be-4cc8-4d23-87f5-5658b8189e68/volumes" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.939017 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6nnbh"] Nov 25 19:50:30 crc 
kubenswrapper[4775]: E1125 19:50:30.939871 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerName="registry-server" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.939888 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerName="registry-server" Nov 25 19:50:30 crc kubenswrapper[4775]: E1125 19:50:30.939920 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerName="extract-content" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.939928 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerName="extract-content" Nov 25 19:50:30 crc kubenswrapper[4775]: E1125 19:50:30.939945 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerName="extract-utilities" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.939953 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerName="extract-utilities" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.940125 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9a71be-4cc8-4d23-87f5-5658b8189e68" containerName="registry-server" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.940906 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.950408 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.950490 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.950823 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5mlrs" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.964021 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 19:50:30 crc kubenswrapper[4775]: I1125 19:50:30.971067 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6nnbh"] Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.023333 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fd83c3-34fe-4c70-92ff-3633d221418a-config\") pod \"dnsmasq-dns-675f4bcbfc-6nnbh\" (UID: \"19fd83c3-34fe-4c70-92ff-3633d221418a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.023457 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msm9k\" (UniqueName: \"kubernetes.io/projected/19fd83c3-34fe-4c70-92ff-3633d221418a-kube-api-access-msm9k\") pod \"dnsmasq-dns-675f4bcbfc-6nnbh\" (UID: \"19fd83c3-34fe-4c70-92ff-3633d221418a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.046826 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhf6k"] Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.048221 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.050926 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.100183 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhf6k"] Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.139626 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-config\") pod \"dnsmasq-dns-78dd6ddcc-nhf6k\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.139741 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msm9k\" (UniqueName: \"kubernetes.io/projected/19fd83c3-34fe-4c70-92ff-3633d221418a-kube-api-access-msm9k\") pod \"dnsmasq-dns-675f4bcbfc-6nnbh\" (UID: \"19fd83c3-34fe-4c70-92ff-3633d221418a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.139798 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nhf6k\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.139818 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fd83c3-34fe-4c70-92ff-3633d221418a-config\") pod \"dnsmasq-dns-675f4bcbfc-6nnbh\" (UID: \"19fd83c3-34fe-4c70-92ff-3633d221418a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 
19:50:31.139842 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d559t\" (UniqueName: \"kubernetes.io/projected/33d9acd4-ae09-4721-b3db-6b4db93325b4-kube-api-access-d559t\") pod \"dnsmasq-dns-78dd6ddcc-nhf6k\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.140812 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fd83c3-34fe-4c70-92ff-3633d221418a-config\") pod \"dnsmasq-dns-675f4bcbfc-6nnbh\" (UID: \"19fd83c3-34fe-4c70-92ff-3633d221418a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.163008 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msm9k\" (UniqueName: \"kubernetes.io/projected/19fd83c3-34fe-4c70-92ff-3633d221418a-kube-api-access-msm9k\") pod \"dnsmasq-dns-675f4bcbfc-6nnbh\" (UID: \"19fd83c3-34fe-4c70-92ff-3633d221418a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.241465 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nhf6k\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.241713 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d559t\" (UniqueName: \"kubernetes.io/projected/33d9acd4-ae09-4721-b3db-6b4db93325b4-kube-api-access-d559t\") pod \"dnsmasq-dns-78dd6ddcc-nhf6k\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.241838 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-config\") pod \"dnsmasq-dns-78dd6ddcc-nhf6k\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.242394 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nhf6k\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.242584 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-config\") pod \"dnsmasq-dns-78dd6ddcc-nhf6k\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.257642 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d559t\" (UniqueName: \"kubernetes.io/projected/33d9acd4-ae09-4721-b3db-6b4db93325b4-kube-api-access-d559t\") pod \"dnsmasq-dns-78dd6ddcc-nhf6k\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.276326 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.385256 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.816788 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6nnbh"] Nov 25 19:50:31 crc kubenswrapper[4775]: W1125 19:50:31.820024 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fd83c3_34fe_4c70_92ff_3633d221418a.slice/crio-a4c1168530c450bec4b9acf9eb5cbacbad0a33d8646ee9dd30a0f80b305f65a6 WatchSource:0}: Error finding container a4c1168530c450bec4b9acf9eb5cbacbad0a33d8646ee9dd30a0f80b305f65a6: Status 404 returned error can't find the container with id a4c1168530c450bec4b9acf9eb5cbacbad0a33d8646ee9dd30a0f80b305f65a6 Nov 25 19:50:31 crc kubenswrapper[4775]: I1125 19:50:31.892876 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhf6k"] Nov 25 19:50:31 crc kubenswrapper[4775]: W1125 19:50:31.896211 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33d9acd4_ae09_4721_b3db_6b4db93325b4.slice/crio-6cf85aba08393435eb0d3b3e54ab19e302c6ac9c5cbb706e193308c3ee16d483 WatchSource:0}: Error finding container 6cf85aba08393435eb0d3b3e54ab19e302c6ac9c5cbb706e193308c3ee16d483: Status 404 returned error can't find the container with id 6cf85aba08393435eb0d3b3e54ab19e302c6ac9c5cbb706e193308c3ee16d483 Nov 25 19:50:32 crc kubenswrapper[4775]: I1125 19:50:32.453918 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" event={"ID":"19fd83c3-34fe-4c70-92ff-3633d221418a","Type":"ContainerStarted","Data":"a4c1168530c450bec4b9acf9eb5cbacbad0a33d8646ee9dd30a0f80b305f65a6"} Nov 25 19:50:32 crc kubenswrapper[4775]: I1125 19:50:32.455745 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" 
event={"ID":"33d9acd4-ae09-4721-b3db-6b4db93325b4","Type":"ContainerStarted","Data":"6cf85aba08393435eb0d3b3e54ab19e302c6ac9c5cbb706e193308c3ee16d483"} Nov 25 19:50:33 crc kubenswrapper[4775]: I1125 19:50:33.948346 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6nnbh"] Nov 25 19:50:33 crc kubenswrapper[4775]: I1125 19:50:33.970504 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krgs7"] Nov 25 19:50:33 crc kubenswrapper[4775]: I1125 19:50:33.971587 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:33 crc kubenswrapper[4775]: I1125 19:50:33.989848 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krgs7"] Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.082230 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-config\") pod \"dnsmasq-dns-666b6646f7-krgs7\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.082280 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-krgs7\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.082301 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpnsl\" (UniqueName: \"kubernetes.io/projected/fc392a66-f0ed-43e8-ba93-74f34164ce3f-kube-api-access-zpnsl\") pod \"dnsmasq-dns-666b6646f7-krgs7\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " 
pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.183503 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-config\") pod \"dnsmasq-dns-666b6646f7-krgs7\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.183550 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-krgs7\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.183580 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpnsl\" (UniqueName: \"kubernetes.io/projected/fc392a66-f0ed-43e8-ba93-74f34164ce3f-kube-api-access-zpnsl\") pod \"dnsmasq-dns-666b6646f7-krgs7\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.184634 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-config\") pod \"dnsmasq-dns-666b6646f7-krgs7\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.184705 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-krgs7\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.218686 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpnsl\" (UniqueName: \"kubernetes.io/projected/fc392a66-f0ed-43e8-ba93-74f34164ce3f-kube-api-access-zpnsl\") pod \"dnsmasq-dns-666b6646f7-krgs7\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.230167 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhf6k"] Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.250172 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-md645"] Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.251270 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.269195 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-md645"] Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.293406 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.391078 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9nn6\" (UniqueName: \"kubernetes.io/projected/2a4c8eb9-a366-47bf-9364-a991c7fc9836-kube-api-access-b9nn6\") pod \"dnsmasq-dns-57d769cc4f-md645\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.391134 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-config\") pod \"dnsmasq-dns-57d769cc4f-md645\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.391219 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-md645\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.492291 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nn6\" (UniqueName: \"kubernetes.io/projected/2a4c8eb9-a366-47bf-9364-a991c7fc9836-kube-api-access-b9nn6\") pod \"dnsmasq-dns-57d769cc4f-md645\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.492586 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-config\") pod \"dnsmasq-dns-57d769cc4f-md645\" (UID: 
\"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.492691 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-md645\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.493711 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-md645\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.494705 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-config\") pod \"dnsmasq-dns-57d769cc4f-md645\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.514014 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nn6\" (UniqueName: \"kubernetes.io/projected/2a4c8eb9-a366-47bf-9364-a991c7fc9836-kube-api-access-b9nn6\") pod \"dnsmasq-dns-57d769cc4f-md645\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.616617 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:34 crc kubenswrapper[4775]: I1125 19:50:34.765506 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krgs7"] Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.061432 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-md645"] Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.132533 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.134044 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.136123 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.136575 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.136990 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6vrgj" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.137213 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.137419 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.137771 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.138021 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.144983 4775 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303367 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-config-data\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303437 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50995ab5-ef22-4466-9906-fab208c9a82d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303509 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303572 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303615 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303639 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303684 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv7c6\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-kube-api-access-lv7c6\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303714 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50995ab5-ef22-4466-9906-fab208c9a82d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303737 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303755 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.303775 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.372273 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.375415 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.378562 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.378970 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qsmb9" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.379073 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.379203 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.379299 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.379599 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.379955 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.385568 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.404669 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.404736 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.404809 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv7c6\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-kube-api-access-lv7c6\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.404866 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50995ab5-ef22-4466-9906-fab208c9a82d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.404910 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.404942 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.404980 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.405033 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-config-data\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.405071 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50995ab5-ef22-4466-9906-fab208c9a82d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.405106 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.405159 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.406811 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.406864 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.406888 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.407287 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.408258 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-config-data\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.409234 
4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.411131 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.415271 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50995ab5-ef22-4466-9906-fab208c9a82d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.416265 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50995ab5-ef22-4466-9906-fab208c9a82d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.420118 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.446353 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv7c6\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-kube-api-access-lv7c6\") pod 
\"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.447454 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.464055 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508044 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508092 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508148 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508167 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508185 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508334 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508442 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508479 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508523 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvg2r\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-kube-api-access-wvg2r\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.508556 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.609593 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.609985 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610014 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610048 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvg2r\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-kube-api-access-wvg2r\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610075 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610105 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610136 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610184 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610228 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610250 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610270 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.610835 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.612273 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: 
I1125 19:50:35.614020 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.615223 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.616408 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.616887 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.616935 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.617021 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.617321 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.617457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.633024 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvg2r\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-kube-api-access-wvg2r\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.643465 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:35 crc kubenswrapper[4775]: I1125 19:50:35.801863 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.803390 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.804571 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.808129 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.808137 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-d5ns7" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.809274 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.809818 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.821822 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.839198 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.931671 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c8a9cba-f38d-45fb-8a7e-942f148611ab-config-data-default\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.931847 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1c8a9cba-f38d-45fb-8a7e-942f148611ab-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.931890 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84gng\" (UniqueName: \"kubernetes.io/projected/1c8a9cba-f38d-45fb-8a7e-942f148611ab-kube-api-access-84gng\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.933510 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.933571 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8a9cba-f38d-45fb-8a7e-942f148611ab-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.933603 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8a9cba-f38d-45fb-8a7e-942f148611ab-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.933641 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/1c8a9cba-f38d-45fb-8a7e-942f148611ab-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:36 crc kubenswrapper[4775]: I1125 19:50:36.933709 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c8a9cba-f38d-45fb-8a7e-942f148611ab-kolla-config\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.034952 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c8a9cba-f38d-45fb-8a7e-942f148611ab-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.035002 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84gng\" (UniqueName: \"kubernetes.io/projected/1c8a9cba-f38d-45fb-8a7e-942f148611ab-kube-api-access-84gng\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.035029 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.035045 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8a9cba-f38d-45fb-8a7e-942f148611ab-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.035067 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8a9cba-f38d-45fb-8a7e-942f148611ab-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.035083 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c8a9cba-f38d-45fb-8a7e-942f148611ab-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.035111 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c8a9cba-f38d-45fb-8a7e-942f148611ab-kolla-config\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.035141 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c8a9cba-f38d-45fb-8a7e-942f148611ab-config-data-default\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.036152 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c8a9cba-f38d-45fb-8a7e-942f148611ab-config-data-default\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 
19:50:37.037131 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.038323 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c8a9cba-f38d-45fb-8a7e-942f148611ab-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.038323 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c8a9cba-f38d-45fb-8a7e-942f148611ab-kolla-config\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.039480 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c8a9cba-f38d-45fb-8a7e-942f148611ab-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.057129 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8a9cba-f38d-45fb-8a7e-942f148611ab-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.059244 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1c8a9cba-f38d-45fb-8a7e-942f148611ab-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.064379 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84gng\" (UniqueName: \"kubernetes.io/projected/1c8a9cba-f38d-45fb-8a7e-942f148611ab-kube-api-access-84gng\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.071732 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"1c8a9cba-f38d-45fb-8a7e-942f148611ab\") " pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.135642 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 19:50:37 crc kubenswrapper[4775]: W1125 19:50:37.367331 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a4c8eb9_a366_47bf_9364_a991c7fc9836.slice/crio-8d4b91da7407d01d65295e0c233800560fa4a3171059c8a3e5127a78987d3221 WatchSource:0}: Error finding container 8d4b91da7407d01d65295e0c233800560fa4a3171059c8a3e5127a78987d3221: Status 404 returned error can't find the container with id 8d4b91da7407d01d65295e0c233800560fa4a3171059c8a3e5127a78987d3221 Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.499581 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-md645" event={"ID":"2a4c8eb9-a366-47bf-9364-a991c7fc9836","Type":"ContainerStarted","Data":"8d4b91da7407d01d65295e0c233800560fa4a3171059c8a3e5127a78987d3221"} Nov 25 19:50:37 crc kubenswrapper[4775]: I1125 19:50:37.500320 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" event={"ID":"fc392a66-f0ed-43e8-ba93-74f34164ce3f","Type":"ContainerStarted","Data":"972d8f5b4ef14b841c75b9b0690de4d0e1b691b73e968ec085043b5126eca830"} Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.232215 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.233792 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.236377 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.239139 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.239211 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.243124 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-bhsb8" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.248755 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.354474 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.354553 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.354690 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68r4j\" (UniqueName: 
\"kubernetes.io/projected/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-kube-api-access-68r4j\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.354741 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.354793 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.354825 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.354900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.354935 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.456414 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.456504 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.456593 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.456628 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.456691 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.456735 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.456842 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68r4j\" (UniqueName: \"kubernetes.io/projected/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-kube-api-access-68r4j\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.457050 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.457093 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.457674 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.457894 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.458269 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.458969 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.466907 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.466976 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.490079 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68r4j\" (UniqueName: \"kubernetes.io/projected/97e9f968-e12b-413d-a36b-7a2f16d0b1ec-kube-api-access-68r4j\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.501757 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"97e9f968-e12b-413d-a36b-7a2f16d0b1ec\") " pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.570875 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.587925 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.591358 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.596894 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-q82rq" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.597058 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.597159 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.598041 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.659041 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb87a80-e2c2-4c52-b2d2-9f4416324624-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.659117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdb87a80-e2c2-4c52-b2d2-9f4416324624-config-data\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.659138 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdb87a80-e2c2-4c52-b2d2-9f4416324624-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.659880 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6q8vh\" (UniqueName: \"kubernetes.io/projected/bdb87a80-e2c2-4c52-b2d2-9f4416324624-kube-api-access-6q8vh\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.659984 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bdb87a80-e2c2-4c52-b2d2-9f4416324624-kolla-config\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.761508 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bdb87a80-e2c2-4c52-b2d2-9f4416324624-kolla-config\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.761553 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb87a80-e2c2-4c52-b2d2-9f4416324624-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.761603 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdb87a80-e2c2-4c52-b2d2-9f4416324624-config-data\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.761625 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdb87a80-e2c2-4c52-b2d2-9f4416324624-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " 
pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.761694 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q8vh\" (UniqueName: \"kubernetes.io/projected/bdb87a80-e2c2-4c52-b2d2-9f4416324624-kube-api-access-6q8vh\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.770043 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.770143 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.770433 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb87a80-e2c2-4c52-b2d2-9f4416324624-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.773452 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bdb87a80-e2c2-4c52-b2d2-9f4416324624-kolla-config\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.773533 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdb87a80-e2c2-4c52-b2d2-9f4416324624-config-data\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.775978 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q8vh\" (UniqueName: 
\"kubernetes.io/projected/bdb87a80-e2c2-4c52-b2d2-9f4416324624-kube-api-access-6q8vh\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.776038 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdb87a80-e2c2-4c52-b2d2-9f4416324624-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bdb87a80-e2c2-4c52-b2d2-9f4416324624\") " pod="openstack/memcached-0" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.912047 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-q82rq" Nov 25 19:50:38 crc kubenswrapper[4775]: I1125 19:50:38.922043 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 19:50:40 crc kubenswrapper[4775]: I1125 19:50:40.335522 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 19:50:40 crc kubenswrapper[4775]: I1125 19:50:40.336447 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 19:50:40 crc kubenswrapper[4775]: I1125 19:50:40.349013 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 19:50:40 crc kubenswrapper[4775]: I1125 19:50:40.352633 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-n6gv9" Nov 25 19:50:40 crc kubenswrapper[4775]: I1125 19:50:40.398395 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96hcn\" (UniqueName: \"kubernetes.io/projected/6b6ac464-ee79-41a6-8977-0db9e5044ee9-kube-api-access-96hcn\") pod \"kube-state-metrics-0\" (UID: \"6b6ac464-ee79-41a6-8977-0db9e5044ee9\") " pod="openstack/kube-state-metrics-0" Nov 25 19:50:40 crc kubenswrapper[4775]: I1125 19:50:40.500002 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96hcn\" (UniqueName: \"kubernetes.io/projected/6b6ac464-ee79-41a6-8977-0db9e5044ee9-kube-api-access-96hcn\") pod \"kube-state-metrics-0\" (UID: \"6b6ac464-ee79-41a6-8977-0db9e5044ee9\") " pod="openstack/kube-state-metrics-0" Nov 25 19:50:40 crc kubenswrapper[4775]: I1125 19:50:40.516137 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96hcn\" (UniqueName: \"kubernetes.io/projected/6b6ac464-ee79-41a6-8977-0db9e5044ee9-kube-api-access-96hcn\") pod \"kube-state-metrics-0\" (UID: \"6b6ac464-ee79-41a6-8977-0db9e5044ee9\") " pod="openstack/kube-state-metrics-0" Nov 25 19:50:40 crc kubenswrapper[4775]: I1125 19:50:40.652003 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.682697 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k9862"] Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.684303 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.687263 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.687528 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jg8s5" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.687815 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.690322 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-ckpwc"] Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.692717 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.697687 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9862"] Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.737247 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ckpwc"] Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.768293 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462d24f9-e5cf-42b4-905e-13fa5f5716fe-combined-ca-bundle\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.768339 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5426\" (UniqueName: \"kubernetes.io/projected/462d24f9-e5cf-42b4-905e-13fa5f5716fe-kube-api-access-j5426\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.768388 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/462d24f9-e5cf-42b4-905e-13fa5f5716fe-var-run-ovn\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.768482 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/462d24f9-e5cf-42b4-905e-13fa5f5716fe-var-log-ovn\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: 
I1125 19:50:43.768524 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/462d24f9-e5cf-42b4-905e-13fa5f5716fe-scripts\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.768547 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/462d24f9-e5cf-42b4-905e-13fa5f5716fe-ovn-controller-tls-certs\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.768613 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/462d24f9-e5cf-42b4-905e-13fa5f5716fe-var-run\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.869954 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5426\" (UniqueName: \"kubernetes.io/projected/462d24f9-e5cf-42b4-905e-13fa5f5716fe-kube-api-access-j5426\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870025 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-var-log\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870092 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/462d24f9-e5cf-42b4-905e-13fa5f5716fe-var-run-ovn\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870119 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-etc-ovs\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870162 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-var-run\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870436 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c63e79d7-eea0-447e-b944-cd93ce3ebf55-scripts\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870510 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/462d24f9-e5cf-42b4-905e-13fa5f5716fe-var-log-ovn\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870560 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/462d24f9-e5cf-42b4-905e-13fa5f5716fe-scripts\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870586 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/462d24f9-e5cf-42b4-905e-13fa5f5716fe-ovn-controller-tls-certs\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870736 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszr7\" (UniqueName: \"kubernetes.io/projected/c63e79d7-eea0-447e-b944-cd93ce3ebf55-kube-api-access-jszr7\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/462d24f9-e5cf-42b4-905e-13fa5f5716fe-var-run\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870924 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-var-lib\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.870980 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/462d24f9-e5cf-42b4-905e-13fa5f5716fe-var-run-ovn\") pod 
\"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.871005 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462d24f9-e5cf-42b4-905e-13fa5f5716fe-combined-ca-bundle\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.871084 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/462d24f9-e5cf-42b4-905e-13fa5f5716fe-var-log-ovn\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.871129 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/462d24f9-e5cf-42b4-905e-13fa5f5716fe-var-run\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.873290 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/462d24f9-e5cf-42b4-905e-13fa5f5716fe-scripts\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.876833 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/462d24f9-e5cf-42b4-905e-13fa5f5716fe-ovn-controller-tls-certs\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.876861 
4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462d24f9-e5cf-42b4-905e-13fa5f5716fe-combined-ca-bundle\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.889480 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5426\" (UniqueName: \"kubernetes.io/projected/462d24f9-e5cf-42b4-905e-13fa5f5716fe-kube-api-access-j5426\") pod \"ovn-controller-k9862\" (UID: \"462d24f9-e5cf-42b4-905e-13fa5f5716fe\") " pod="openstack/ovn-controller-k9862" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972375 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-etc-ovs\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972442 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-var-run\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972462 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c63e79d7-eea0-447e-b944-cd93ce3ebf55-scripts\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972510 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszr7\" (UniqueName: 
\"kubernetes.io/projected/c63e79d7-eea0-447e-b944-cd93ce3ebf55-kube-api-access-jszr7\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972542 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-var-lib\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972586 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-var-log\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972749 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-var-run\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972876 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-var-log\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972877 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-etc-ovs\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " 
pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.972924 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c63e79d7-eea0-447e-b944-cd93ce3ebf55-var-lib\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.974447 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c63e79d7-eea0-447e-b944-cd93ce3ebf55-scripts\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:43 crc kubenswrapper[4775]: I1125 19:50:43.987497 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jszr7\" (UniqueName: \"kubernetes.io/projected/c63e79d7-eea0-447e-b944-cd93ce3ebf55-kube-api-access-jszr7\") pod \"ovn-controller-ovs-ckpwc\" (UID: \"c63e79d7-eea0-447e-b944-cd93ce3ebf55\") " pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.026227 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9862" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.031902 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.168478 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.173268 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.178598 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tpkp9" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.178966 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.178825 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.178867 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.180825 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.182857 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.277120 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0bc2f5-2fcc-432c-b9c9-508383732023-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.277457 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0bc2f5-2fcc-432c-b9c9-508383732023-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.277520 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0bc2f5-2fcc-432c-b9c9-508383732023-config\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.277543 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wff48\" (UniqueName: \"kubernetes.io/projected/2b0bc2f5-2fcc-432c-b9c9-508383732023-kube-api-access-wff48\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.277567 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b0bc2f5-2fcc-432c-b9c9-508383732023-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.277606 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b0bc2f5-2fcc-432c-b9c9-508383732023-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.277627 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0bc2f5-2fcc-432c-b9c9-508383732023-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.277670 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.378931 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0bc2f5-2fcc-432c-b9c9-508383732023-config\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.378970 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wff48\" (UniqueName: \"kubernetes.io/projected/2b0bc2f5-2fcc-432c-b9c9-508383732023-kube-api-access-wff48\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.378989 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b0bc2f5-2fcc-432c-b9c9-508383732023-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.379013 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b0bc2f5-2fcc-432c-b9c9-508383732023-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.379031 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0bc2f5-2fcc-432c-b9c9-508383732023-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 
crc kubenswrapper[4775]: I1125 19:50:44.379052 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.379095 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0bc2f5-2fcc-432c-b9c9-508383732023-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.379167 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0bc2f5-2fcc-432c-b9c9-508383732023-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.379539 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.379943 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0bc2f5-2fcc-432c-b9c9-508383732023-config\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.379952 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/2b0bc2f5-2fcc-432c-b9c9-508383732023-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.382520 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0bc2f5-2fcc-432c-b9c9-508383732023-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.382973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b0bc2f5-2fcc-432c-b9c9-508383732023-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.382981 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0bc2f5-2fcc-432c-b9c9-508383732023-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.395518 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0bc2f5-2fcc-432c-b9c9-508383732023-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.402586 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wff48\" (UniqueName: \"kubernetes.io/projected/2b0bc2f5-2fcc-432c-b9c9-508383732023-kube-api-access-wff48\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " 
pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.415240 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2b0bc2f5-2fcc-432c-b9c9-508383732023\") " pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:44 crc kubenswrapper[4775]: I1125 19:50:44.488981 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 19:50:46 crc kubenswrapper[4775]: E1125 19:50:46.914063 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 19:50:46 crc kubenswrapper[4775]: E1125 19:50:46.914857 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msm9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-6nnbh_openstack(19fd83c3-34fe-4c70-92ff-3633d221418a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:50:46 crc kubenswrapper[4775]: E1125 19:50:46.916060 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" podUID="19fd83c3-34fe-4c70-92ff-3633d221418a" Nov 25 19:50:47 crc kubenswrapper[4775]: E1125 19:50:47.109977 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 19:50:47 crc kubenswrapper[4775]: E1125 19:50:47.110371 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d559t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-nhf6k_openstack(33d9acd4-ae09-4721-b3db-6b4db93325b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:50:47 crc kubenswrapper[4775]: E1125 19:50:47.111544 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" podUID="33d9acd4-ae09-4721-b3db-6b4db93325b4" Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.362057 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.502154 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.599491 4775 generic.go:334] "Generic (PLEG): container finished" podID="2a4c8eb9-a366-47bf-9364-a991c7fc9836" containerID="0b3fa344249a2f2219e3b31e73f54fbf5d4db0bf17d6a6950b6140026f734e17" exitCode=0 Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.599601 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-md645" 
event={"ID":"2a4c8eb9-a366-47bf-9364-a991c7fc9836","Type":"ContainerDied","Data":"0b3fa344249a2f2219e3b31e73f54fbf5d4db0bf17d6a6950b6140026f734e17"} Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.617936 4775 generic.go:334] "Generic (PLEG): container finished" podID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" containerID="d0720f088097541ff4b6655a647f2d9514151ee1dcaddfef6b50a1cc8537ec23" exitCode=0 Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.617991 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" event={"ID":"fc392a66-f0ed-43e8-ba93-74f34164ce3f","Type":"ContainerDied","Data":"d0720f088097541ff4b6655a647f2d9514151ee1dcaddfef6b50a1cc8537ec23"} Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.621328 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50995ab5-ef22-4466-9906-fab208c9a82d","Type":"ContainerStarted","Data":"91fb4c07177e013ec10ef171ef6f315ecb1509843dcc02aab6e4165cd413f88b"} Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.622834 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c8a9cba-f38d-45fb-8a7e-942f148611ab","Type":"ContainerStarted","Data":"4e1f4806f76e588720db01ac7fe75d7ed40a19b331895b09dae3e2e857901126"} Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.763227 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 19:50:47 crc kubenswrapper[4775]: W1125 19:50:47.781877 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b6ac464_ee79_41a6_8977_0db9e5044ee9.slice/crio-6be61731458cf2b907c96f971ba5a8cfe0fcb3e67e97aedef88af66e9c881677 WatchSource:0}: Error finding container 6be61731458cf2b907c96f971ba5a8cfe0fcb3e67e97aedef88af66e9c881677: Status 404 returned error can't find the container with id 
6be61731458cf2b907c96f971ba5a8cfe0fcb3e67e97aedef88af66e9c881677 Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.785436 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 19:50:47 crc kubenswrapper[4775]: W1125 19:50:47.789126 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97e9f968_e12b_413d_a36b_7a2f16d0b1ec.slice/crio-11981a1c60739de41b6087d71bebbc6687eea5b94b5a4e8722977c6111021e79 WatchSource:0}: Error finding container 11981a1c60739de41b6087d71bebbc6687eea5b94b5a4e8722977c6111021e79: Status 404 returned error can't find the container with id 11981a1c60739de41b6087d71bebbc6687eea5b94b5a4e8722977c6111021e79 Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.809833 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9862"] Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.831242 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 19:50:47 crc kubenswrapper[4775]: I1125 19:50:47.896484 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ckpwc"] Nov 25 19:50:47 crc kubenswrapper[4775]: W1125 19:50:47.899304 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc63e79d7_eea0_447e_b944_cd93ce3ebf55.slice/crio-2c42028381145f9fd7cd7011d5f657d5e79911bd575511f6c70419f6d6d357bf WatchSource:0}: Error finding container 2c42028381145f9fd7cd7011d5f657d5e79911bd575511f6c70419f6d6d357bf: Status 404 returned error can't find the container with id 2c42028381145f9fd7cd7011d5f657d5e79911bd575511f6c70419f6d6d357bf Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.031135 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 19:50:48 crc kubenswrapper[4775]: W1125 19:50:48.040901 4775 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdb87a80_e2c2_4c52_b2d2_9f4416324624.slice/crio-426d0e08a0d04251dbd79b914a771ca0aaa4ab93a7496c32b3464eb4d409f77b WatchSource:0}: Error finding container 426d0e08a0d04251dbd79b914a771ca0aaa4ab93a7496c32b3464eb4d409f77b: Status 404 returned error can't find the container with id 426d0e08a0d04251dbd79b914a771ca0aaa4ab93a7496c32b3464eb4d409f77b Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.041764 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.081052 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.118366 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 19:50:48 crc kubenswrapper[4775]: W1125 19:50:48.127404 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b0bc2f5_2fcc_432c_b9c9_508383732023.slice/crio-dc16031854d5a9f4dab8c93edf642bece74debbb00d9fbdb5eeec659f2840976 WatchSource:0}: Error finding container dc16031854d5a9f4dab8c93edf642bece74debbb00d9fbdb5eeec659f2840976: Status 404 returned error can't find the container with id dc16031854d5a9f4dab8c93edf642bece74debbb00d9fbdb5eeec659f2840976 Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.166731 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d559t\" (UniqueName: \"kubernetes.io/projected/33d9acd4-ae09-4721-b3db-6b4db93325b4-kube-api-access-d559t\") pod \"33d9acd4-ae09-4721-b3db-6b4db93325b4\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.166870 4775 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-config\") pod \"33d9acd4-ae09-4721-b3db-6b4db93325b4\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.167451 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-config" (OuterVolumeSpecName: "config") pod "33d9acd4-ae09-4721-b3db-6b4db93325b4" (UID: "33d9acd4-ae09-4721-b3db-6b4db93325b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.167570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msm9k\" (UniqueName: \"kubernetes.io/projected/19fd83c3-34fe-4c70-92ff-3633d221418a-kube-api-access-msm9k\") pod \"19fd83c3-34fe-4c70-92ff-3633d221418a\" (UID: \"19fd83c3-34fe-4c70-92ff-3633d221418a\") " Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.167987 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-dns-svc\") pod \"33d9acd4-ae09-4721-b3db-6b4db93325b4\" (UID: \"33d9acd4-ae09-4721-b3db-6b4db93325b4\") " Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.168036 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fd83c3-34fe-4c70-92ff-3633d221418a-config\") pod \"19fd83c3-34fe-4c70-92ff-3633d221418a\" (UID: \"19fd83c3-34fe-4c70-92ff-3633d221418a\") " Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.168550 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"33d9acd4-ae09-4721-b3db-6b4db93325b4" (UID: "33d9acd4-ae09-4721-b3db-6b4db93325b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.168706 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fd83c3-34fe-4c70-92ff-3633d221418a-config" (OuterVolumeSpecName: "config") pod "19fd83c3-34fe-4c70-92ff-3633d221418a" (UID: "19fd83c3-34fe-4c70-92ff-3633d221418a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.168854 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.168876 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fd83c3-34fe-4c70-92ff-3633d221418a-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.168889 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d9acd4-ae09-4721-b3db-6b4db93325b4-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.172294 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fd83c3-34fe-4c70-92ff-3633d221418a-kube-api-access-msm9k" (OuterVolumeSpecName: "kube-api-access-msm9k") pod "19fd83c3-34fe-4c70-92ff-3633d221418a" (UID: "19fd83c3-34fe-4c70-92ff-3633d221418a"). InnerVolumeSpecName "kube-api-access-msm9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.172576 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d9acd4-ae09-4721-b3db-6b4db93325b4-kube-api-access-d559t" (OuterVolumeSpecName: "kube-api-access-d559t") pod "33d9acd4-ae09-4721-b3db-6b4db93325b4" (UID: "33d9acd4-ae09-4721-b3db-6b4db93325b4"). InnerVolumeSpecName "kube-api-access-d559t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.269932 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msm9k\" (UniqueName: \"kubernetes.io/projected/19fd83c3-34fe-4c70-92ff-3633d221418a-kube-api-access-msm9k\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.269966 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d559t\" (UniqueName: \"kubernetes.io/projected/33d9acd4-ae09-4721-b3db-6b4db93325b4-kube-api-access-d559t\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.480095 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.481587 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.484756 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xwxdd" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.484945 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.484947 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.485100 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.491071 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.574085 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7hk\" (UniqueName: \"kubernetes.io/projected/cd854157-5d64-4744-9065-45b8d7e08c80-kube-api-access-tx7hk\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.574147 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd854157-5d64-4744-9065-45b8d7e08c80-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.574168 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd854157-5d64-4744-9065-45b8d7e08c80-ovsdbserver-sb-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.574185 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd854157-5d64-4744-9065-45b8d7e08c80-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.574246 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.574428 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd854157-5d64-4744-9065-45b8d7e08c80-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.574509 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd854157-5d64-4744-9065-45b8d7e08c80-config\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.574546 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd854157-5d64-4744-9065-45b8d7e08c80-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 
crc kubenswrapper[4775]: I1125 19:50:48.632058 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" event={"ID":"fc392a66-f0ed-43e8-ba93-74f34164ce3f","Type":"ContainerStarted","Data":"758c410385a538a33e1908f45cd0d9741edd42041518028460ea50d3784ed9e2"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.632235 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.637251 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b6ac464-ee79-41a6-8977-0db9e5044ee9","Type":"ContainerStarted","Data":"6be61731458cf2b907c96f971ba5a8cfe0fcb3e67e97aedef88af66e9c881677"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.638192 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ckpwc" event={"ID":"c63e79d7-eea0-447e-b944-cd93ce3ebf55","Type":"ContainerStarted","Data":"2c42028381145f9fd7cd7011d5f657d5e79911bd575511f6c70419f6d6d357bf"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.639991 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" event={"ID":"19fd83c3-34fe-4c70-92ff-3633d221418a","Type":"ContainerDied","Data":"a4c1168530c450bec4b9acf9eb5cbacbad0a33d8646ee9dd30a0f80b305f65a6"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.640054 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6nnbh" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.644030 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-md645" event={"ID":"2a4c8eb9-a366-47bf-9364-a991c7fc9836","Type":"ContainerStarted","Data":"718241a82a8d1cd2cc133a9f27f3c955ce316c73820018a81885e147df0303fe"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.652144 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.655837 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2b0bc2f5-2fcc-432c-b9c9-508383732023","Type":"ContainerStarted","Data":"dc16031854d5a9f4dab8c93edf642bece74debbb00d9fbdb5eeec659f2840976"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.659240 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa","Type":"ContainerStarted","Data":"9a69ea773f120d52e00ec117f79b7c94ab792f1d58edc08640f9840bd71b7dbe"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.661951 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"97e9f968-e12b-413d-a36b-7a2f16d0b1ec","Type":"ContainerStarted","Data":"11981a1c60739de41b6087d71bebbc6687eea5b94b5a4e8722977c6111021e79"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.664035 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9862" event={"ID":"462d24f9-e5cf-42b4-905e-13fa5f5716fe","Type":"ContainerStarted","Data":"d247ad2ddb8f39808ea6d7ec2324310ae17baa135adca85fc1c1e80592e08d1e"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.666317 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" 
podStartSLOduration=5.943094545 podStartE2EDuration="15.666294828s" podCreationTimestamp="2025-11-25 19:50:33 +0000 UTC" firstStartedPulling="2025-11-25 19:50:37.366955178 +0000 UTC m=+1019.283317544" lastFinishedPulling="2025-11-25 19:50:47.090155451 +0000 UTC m=+1029.006517827" observedRunningTime="2025-11-25 19:50:48.653848686 +0000 UTC m=+1030.570211052" watchObservedRunningTime="2025-11-25 19:50:48.666294828 +0000 UTC m=+1030.582657194" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.666969 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" event={"ID":"33d9acd4-ae09-4721-b3db-6b4db93325b4","Type":"ContainerDied","Data":"6cf85aba08393435eb0d3b3e54ab19e302c6ac9c5cbb706e193308c3ee16d483"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.667016 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nhf6k" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.674408 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bdb87a80-e2c2-4c52-b2d2-9f4416324624","Type":"ContainerStarted","Data":"426d0e08a0d04251dbd79b914a771ca0aaa4ab93a7496c32b3464eb4d409f77b"} Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.679010 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-md645" podStartSLOduration=4.912822036 podStartE2EDuration="14.678994569s" podCreationTimestamp="2025-11-25 19:50:34 +0000 UTC" firstStartedPulling="2025-11-25 19:50:37.370574165 +0000 UTC m=+1019.286936541" lastFinishedPulling="2025-11-25 19:50:47.136746708 +0000 UTC m=+1029.053109074" observedRunningTime="2025-11-25 19:50:48.674824878 +0000 UTC m=+1030.591187244" watchObservedRunningTime="2025-11-25 19:50:48.678994569 +0000 UTC m=+1030.595356935" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.679884 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-tx7hk\" (UniqueName: \"kubernetes.io/projected/cd854157-5d64-4744-9065-45b8d7e08c80-kube-api-access-tx7hk\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.680031 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd854157-5d64-4744-9065-45b8d7e08c80-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.680089 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd854157-5d64-4744-9065-45b8d7e08c80-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.680122 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd854157-5d64-4744-9065-45b8d7e08c80-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.680154 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.680220 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd854157-5d64-4744-9065-45b8d7e08c80-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.680268 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd854157-5d64-4744-9065-45b8d7e08c80-config\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.680304 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd854157-5d64-4744-9065-45b8d7e08c80-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.681187 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.684669 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd854157-5d64-4744-9065-45b8d7e08c80-config\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.684695 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd854157-5d64-4744-9065-45b8d7e08c80-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.685363 4775 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd854157-5d64-4744-9065-45b8d7e08c80-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.688872 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd854157-5d64-4744-9065-45b8d7e08c80-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.693535 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd854157-5d64-4744-9065-45b8d7e08c80-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.707232 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx7hk\" (UniqueName: \"kubernetes.io/projected/cd854157-5d64-4744-9065-45b8d7e08c80-kube-api-access-tx7hk\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.727434 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd854157-5d64-4744-9065-45b8d7e08c80-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.738001 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6nnbh"] Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.740338 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"cd854157-5d64-4744-9065-45b8d7e08c80\") " pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.744246 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6nnbh"] Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.806080 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.812622 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhf6k"] Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.820399 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhf6k"] Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.877798 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fd83c3-34fe-4c70-92ff-3633d221418a" path="/var/lib/kubelet/pods/19fd83c3-34fe-4c70-92ff-3633d221418a/volumes" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.878554 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d9acd4-ae09-4721-b3db-6b4db93325b4" path="/var/lib/kubelet/pods/33d9acd4-ae09-4721-b3db-6b4db93325b4/volumes" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.903824 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xz2l5"] Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.907162 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.910434 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.923206 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xz2l5"] Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.985854 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914e5d10-52cd-45c0-8da9-cd0fe095274c-config\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.986174 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914e5d10-52cd-45c0-8da9-cd0fe095274c-combined-ca-bundle\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.986223 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/914e5d10-52cd-45c0-8da9-cd0fe095274c-ovn-rundir\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.986241 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/914e5d10-52cd-45c0-8da9-cd0fe095274c-ovs-rundir\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " 
pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.986267 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/914e5d10-52cd-45c0-8da9-cd0fe095274c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:48 crc kubenswrapper[4775]: I1125 19:50:48.986322 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4jb9\" (UniqueName: \"kubernetes.io/projected/914e5d10-52cd-45c0-8da9-cd0fe095274c-kube-api-access-r4jb9\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.090613 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914e5d10-52cd-45c0-8da9-cd0fe095274c-config\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.090718 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914e5d10-52cd-45c0-8da9-cd0fe095274c-combined-ca-bundle\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.090780 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/914e5d10-52cd-45c0-8da9-cd0fe095274c-ovn-rundir\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " 
pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.090797 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/914e5d10-52cd-45c0-8da9-cd0fe095274c-ovs-rundir\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.090828 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/914e5d10-52cd-45c0-8da9-cd0fe095274c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.090896 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4jb9\" (UniqueName: \"kubernetes.io/projected/914e5d10-52cd-45c0-8da9-cd0fe095274c-kube-api-access-r4jb9\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.091396 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914e5d10-52cd-45c0-8da9-cd0fe095274c-config\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.091626 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/914e5d10-52cd-45c0-8da9-cd0fe095274c-ovs-rundir\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc 
kubenswrapper[4775]: I1125 19:50:49.091686 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/914e5d10-52cd-45c0-8da9-cd0fe095274c-ovn-rundir\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.104905 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/914e5d10-52cd-45c0-8da9-cd0fe095274c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.105469 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914e5d10-52cd-45c0-8da9-cd0fe095274c-combined-ca-bundle\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.112758 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krgs7"] Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.117488 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4jb9\" (UniqueName: \"kubernetes.io/projected/914e5d10-52cd-45c0-8da9-cd0fe095274c-kube-api-access-r4jb9\") pod \"ovn-controller-metrics-xz2l5\" (UID: \"914e5d10-52cd-45c0-8da9-cd0fe095274c\") " pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.139489 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9s5xw"] Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.141081 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.146289 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.152536 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9s5xw"] Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.277779 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xz2l5" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.294159 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.294206 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-config\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.294260 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.294280 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td6sp\" (UniqueName: 
\"kubernetes.io/projected/fcc4981e-da3f-4d8c-a113-79521be59db1-kube-api-access-td6sp\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.395387 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.395690 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-config\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.395744 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.395764 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td6sp\" (UniqueName: \"kubernetes.io/projected/fcc4981e-da3f-4d8c-a113-79521be59db1-kube-api-access-td6sp\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.396282 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.396961 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-config\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.396998 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.412344 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td6sp\" (UniqueName: \"kubernetes.io/projected/fcc4981e-da3f-4d8c-a113-79521be59db1-kube-api-access-td6sp\") pod \"dnsmasq-dns-7fd796d7df-9s5xw\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.487902 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.488367 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 19:50:49 crc kubenswrapper[4775]: W1125 19:50:49.564155 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd854157_5d64_4744_9065_45b8d7e08c80.slice/crio-e88b10297d19ec3a374007dc43191cba3c0f742c3671828b7aad67ef9555be7b WatchSource:0}: Error finding container e88b10297d19ec3a374007dc43191cba3c0f742c3671828b7aad67ef9555be7b: Status 404 returned error can't find the container with id e88b10297d19ec3a374007dc43191cba3c0f742c3671828b7aad67ef9555be7b Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.681788 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cd854157-5d64-4744-9065-45b8d7e08c80","Type":"ContainerStarted","Data":"e88b10297d19ec3a374007dc43191cba3c0f742c3671828b7aad67ef9555be7b"} Nov 25 19:50:49 crc kubenswrapper[4775]: I1125 19:50:49.718177 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xz2l5"] Nov 25 19:50:50 crc kubenswrapper[4775]: W1125 19:50:50.087037 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod914e5d10_52cd_45c0_8da9_cd0fe095274c.slice/crio-5bd6730d4ce73ad2197add2dab1c939ef814e3513b0bf0238c0bcbe4327f0d6d WatchSource:0}: Error finding container 5bd6730d4ce73ad2197add2dab1c939ef814e3513b0bf0238c0bcbe4327f0d6d: Status 404 returned error can't find the container with id 5bd6730d4ce73ad2197add2dab1c939ef814e3513b0bf0238c0bcbe4327f0d6d Nov 25 19:50:50 crc kubenswrapper[4775]: I1125 19:50:50.689822 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" podUID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" 
containerName="dnsmasq-dns" containerID="cri-o://758c410385a538a33e1908f45cd0d9741edd42041518028460ea50d3784ed9e2" gracePeriod=10 Nov 25 19:50:50 crc kubenswrapper[4775]: I1125 19:50:50.689899 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xz2l5" event={"ID":"914e5d10-52cd-45c0-8da9-cd0fe095274c","Type":"ContainerStarted","Data":"5bd6730d4ce73ad2197add2dab1c939ef814e3513b0bf0238c0bcbe4327f0d6d"} Nov 25 19:50:51 crc kubenswrapper[4775]: I1125 19:50:51.700692 4775 generic.go:334] "Generic (PLEG): container finished" podID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" containerID="758c410385a538a33e1908f45cd0d9741edd42041518028460ea50d3784ed9e2" exitCode=0 Nov 25 19:50:51 crc kubenswrapper[4775]: I1125 19:50:51.700769 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" event={"ID":"fc392a66-f0ed-43e8-ba93-74f34164ce3f","Type":"ContainerDied","Data":"758c410385a538a33e1908f45cd0d9741edd42041518028460ea50d3784ed9e2"} Nov 25 19:50:54 crc kubenswrapper[4775]: I1125 19:50:54.620940 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.481601 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.620527 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpnsl\" (UniqueName: \"kubernetes.io/projected/fc392a66-f0ed-43e8-ba93-74f34164ce3f-kube-api-access-zpnsl\") pod \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.620723 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-config\") pod \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.620784 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-dns-svc\") pod \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\" (UID: \"fc392a66-f0ed-43e8-ba93-74f34164ce3f\") " Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.628945 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc392a66-f0ed-43e8-ba93-74f34164ce3f-kube-api-access-zpnsl" (OuterVolumeSpecName: "kube-api-access-zpnsl") pod "fc392a66-f0ed-43e8-ba93-74f34164ce3f" (UID: "fc392a66-f0ed-43e8-ba93-74f34164ce3f"). InnerVolumeSpecName "kube-api-access-zpnsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.676873 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-config" (OuterVolumeSpecName: "config") pod "fc392a66-f0ed-43e8-ba93-74f34164ce3f" (UID: "fc392a66-f0ed-43e8-ba93-74f34164ce3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.678179 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc392a66-f0ed-43e8-ba93-74f34164ce3f" (UID: "fc392a66-f0ed-43e8-ba93-74f34164ce3f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.722495 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.722518 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc392a66-f0ed-43e8-ba93-74f34164ce3f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.722528 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpnsl\" (UniqueName: \"kubernetes.io/projected/fc392a66-f0ed-43e8-ba93-74f34164ce3f-kube-api-access-zpnsl\") on node \"crc\" DevicePath \"\"" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.730936 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" event={"ID":"fc392a66-f0ed-43e8-ba93-74f34164ce3f","Type":"ContainerDied","Data":"972d8f5b4ef14b841c75b9b0690de4d0e1b691b73e968ec085043b5126eca830"} Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.730984 4775 scope.go:117] "RemoveContainer" containerID="758c410385a538a33e1908f45cd0d9741edd42041518028460ea50d3784ed9e2" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.731415 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.763493 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krgs7"] Nov 25 19:50:55 crc kubenswrapper[4775]: I1125 19:50:55.769016 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krgs7"] Nov 25 19:50:56 crc kubenswrapper[4775]: I1125 19:50:56.861093 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" path="/var/lib/kubelet/pods/fc392a66-f0ed-43e8-ba93-74f34164ce3f/volumes" Nov 25 19:50:57 crc kubenswrapper[4775]: I1125 19:50:57.673712 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9s5xw"] Nov 25 19:50:57 crc kubenswrapper[4775]: I1125 19:50:57.678927 4775 scope.go:117] "RemoveContainer" containerID="d0720f088097541ff4b6655a647f2d9514151ee1dcaddfef6b50a1cc8537ec23" Nov 25 19:50:58 crc kubenswrapper[4775]: I1125 19:50:58.762879 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" event={"ID":"fcc4981e-da3f-4d8c-a113-79521be59db1","Type":"ContainerStarted","Data":"60c83fe9ead300680abc944f4324dedc00fed991bcfadfcf6fa7d97ffaaf4078"} Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.296758 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-krgs7" podUID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.95:5353: i/o timeout" Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.779451 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c8a9cba-f38d-45fb-8a7e-942f148611ab","Type":"ContainerStarted","Data":"747efb043d5f50b6b2c2087d640b7ab23c0f504d3b4fc85b857a0a5378da097c"} Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.781433 4775 
generic.go:334] "Generic (PLEG): container finished" podID="fcc4981e-da3f-4d8c-a113-79521be59db1" containerID="8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053" exitCode=0 Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.781545 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" event={"ID":"fcc4981e-da3f-4d8c-a113-79521be59db1","Type":"ContainerDied","Data":"8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053"} Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.804575 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ckpwc" event={"ID":"c63e79d7-eea0-447e-b944-cd93ce3ebf55","Type":"ContainerStarted","Data":"dce46f83f496e66a6713fc6a704795e635bbfad43811953663b17ed70d416335"} Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.816727 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bdb87a80-e2c2-4c52-b2d2-9f4416324624","Type":"ContainerStarted","Data":"eddcdfab0427ec6bb2521e5cd4b94a0e535d71750712f75e9cfdba7a5fd859ae"} Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.817105 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.833016 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b6ac464-ee79-41a6-8977-0db9e5044ee9","Type":"ContainerStarted","Data":"5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb"} Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.876436 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.883916 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"2b0bc2f5-2fcc-432c-b9c9-508383732023","Type":"ContainerStarted","Data":"a1215ac67c164e1c9989fc55b0d11e79565b80414c5008ea77d0177e9e2364b9"} Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.883971 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2b0bc2f5-2fcc-432c-b9c9-508383732023","Type":"ContainerStarted","Data":"8d1811ad1338e822d7f1b42730624b09577cb57d5b9ac931667446af52efb45a"} Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.903425 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.984391246 podStartE2EDuration="21.903408583s" podCreationTimestamp="2025-11-25 19:50:38 +0000 UTC" firstStartedPulling="2025-11-25 19:50:48.042972356 +0000 UTC m=+1029.959334722" lastFinishedPulling="2025-11-25 19:50:56.961989683 +0000 UTC m=+1038.878352059" observedRunningTime="2025-11-25 19:50:59.902805207 +0000 UTC m=+1041.819167613" watchObservedRunningTime="2025-11-25 19:50:59.903408583 +0000 UTC m=+1041.819770949" Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.906902 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"97e9f968-e12b-413d-a36b-7a2f16d0b1ec","Type":"ContainerStarted","Data":"115f7703a140f703c4c7d96bc334620c3f7c3696aacc05b0c2fa9a0b45946e9a"} Nov 25 19:50:59 crc kubenswrapper[4775]: I1125 19:50:59.945456 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.089344808 podStartE2EDuration="19.945426748s" podCreationTimestamp="2025-11-25 19:50:40 +0000 UTC" firstStartedPulling="2025-11-25 19:50:47.784896115 +0000 UTC m=+1029.701258501" lastFinishedPulling="2025-11-25 19:50:58.640978065 +0000 UTC m=+1040.557340441" observedRunningTime="2025-11-25 19:50:59.944075822 +0000 UTC m=+1041.860438188" watchObservedRunningTime="2025-11-25 19:50:59.945426748 +0000 UTC m=+1041.861789114" Nov 
25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.000546 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.454230559 podStartE2EDuration="17.000525043s" podCreationTimestamp="2025-11-25 19:50:43 +0000 UTC" firstStartedPulling="2025-11-25 19:50:48.132802992 +0000 UTC m=+1030.049165378" lastFinishedPulling="2025-11-25 19:50:57.679097496 +0000 UTC m=+1039.595459862" observedRunningTime="2025-11-25 19:50:59.989883139 +0000 UTC m=+1041.906245515" watchObservedRunningTime="2025-11-25 19:51:00.000525043 +0000 UTC m=+1041.916887409" Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.934699 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9862" event={"ID":"462d24f9-e5cf-42b4-905e-13fa5f5716fe","Type":"ContainerStarted","Data":"1a80df9f27a3a7c90f5f5cf99f2000c8c15755d0da7993a2cdcf021ec10b48cf"} Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.936385 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-k9862" Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.938406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50995ab5-ef22-4466-9906-fab208c9a82d","Type":"ContainerStarted","Data":"eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0"} Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.943501 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xz2l5" event={"ID":"914e5d10-52cd-45c0-8da9-cd0fe095274c","Type":"ContainerStarted","Data":"e566e4d23fdb4555ad6db2f809b5b71c311e44ab5fc901bc3023e7eadb851628"} Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.949351 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"cd854157-5d64-4744-9065-45b8d7e08c80","Type":"ContainerStarted","Data":"e1c869e3184adebb7a5c6134966ff77f29180e332a112f01778f9e48aa474af2"} Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.949411 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cd854157-5d64-4744-9065-45b8d7e08c80","Type":"ContainerStarted","Data":"e6e2c332521fcb174b05474b7d66f8a7aa56b475f5be24df1e9fe5350189d753"} Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.953617 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa","Type":"ContainerStarted","Data":"70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec"} Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.956506 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" event={"ID":"fcc4981e-da3f-4d8c-a113-79521be59db1","Type":"ContainerStarted","Data":"fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695"} Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.956721 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.958570 4775 generic.go:334] "Generic (PLEG): container finished" podID="c63e79d7-eea0-447e-b944-cd93ce3ebf55" containerID="dce46f83f496e66a6713fc6a704795e635bbfad43811953663b17ed70d416335" exitCode=0 Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.959281 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ckpwc" event={"ID":"c63e79d7-eea0-447e-b944-cd93ce3ebf55","Type":"ContainerDied","Data":"dce46f83f496e66a6713fc6a704795e635bbfad43811953663b17ed70d416335"} Nov 25 19:51:00 crc kubenswrapper[4775]: I1125 19:51:00.970159 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-k9862" 
podStartSLOduration=7.3405936050000005 podStartE2EDuration="17.970134729s" podCreationTimestamp="2025-11-25 19:50:43 +0000 UTC" firstStartedPulling="2025-11-25 19:50:47.786149279 +0000 UTC m=+1029.702511655" lastFinishedPulling="2025-11-25 19:50:58.415690413 +0000 UTC m=+1040.332052779" observedRunningTime="2025-11-25 19:51:00.966732929 +0000 UTC m=+1042.883095335" watchObservedRunningTime="2025-11-25 19:51:00.970134729 +0000 UTC m=+1042.886497135" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.034910 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" podStartSLOduration=12.034875993 podStartE2EDuration="12.034875993s" podCreationTimestamp="2025-11-25 19:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:51:01.021967097 +0000 UTC m=+1042.938329543" watchObservedRunningTime="2025-11-25 19:51:01.034875993 +0000 UTC m=+1042.951238369" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.074123 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.25785267 podStartE2EDuration="14.074093103s" podCreationTimestamp="2025-11-25 19:50:47 +0000 UTC" firstStartedPulling="2025-11-25 19:50:49.575328543 +0000 UTC m=+1031.491690909" lastFinishedPulling="2025-11-25 19:50:58.391568956 +0000 UTC m=+1040.307931342" observedRunningTime="2025-11-25 19:51:01.055486555 +0000 UTC m=+1042.971848931" watchObservedRunningTime="2025-11-25 19:51:01.074093103 +0000 UTC m=+1042.990455479" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.103789 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xz2l5" podStartSLOduration=4.7152148369999995 podStartE2EDuration="13.103769708s" podCreationTimestamp="2025-11-25 19:50:48 +0000 UTC" firstStartedPulling="2025-11-25 19:50:50.089696607 
+0000 UTC m=+1032.006058973" lastFinishedPulling="2025-11-25 19:50:58.478251458 +0000 UTC m=+1040.394613844" observedRunningTime="2025-11-25 19:51:01.082044116 +0000 UTC m=+1042.998406492" watchObservedRunningTime="2025-11-25 19:51:01.103769708 +0000 UTC m=+1043.020132074" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.493776 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9s5xw"] Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.524743 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dtnnl"] Nov 25 19:51:01 crc kubenswrapper[4775]: E1125 19:51:01.525112 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" containerName="dnsmasq-dns" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.525135 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" containerName="dnsmasq-dns" Nov 25 19:51:01 crc kubenswrapper[4775]: E1125 19:51:01.525163 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" containerName="init" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.525171 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" containerName="init" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.525362 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc392a66-f0ed-43e8-ba93-74f34164ce3f" containerName="dnsmasq-dns" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.526412 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.528124 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.534879 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dtnnl"] Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.636162 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.636218 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.636244 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spptw\" (UniqueName: \"kubernetes.io/projected/4bfaa898-d953-4c25-b2ca-76fe02819141-kube-api-access-spptw\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.636511 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-config\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.636711 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.738622 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.738695 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.738727 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.738745 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spptw\" (UniqueName: \"kubernetes.io/projected/4bfaa898-d953-4c25-b2ca-76fe02819141-kube-api-access-spptw\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.738797 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-config\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.739780 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.739873 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.740069 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-config\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.740067 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.762338 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spptw\" (UniqueName: \"kubernetes.io/projected/4bfaa898-d953-4c25-b2ca-76fe02819141-kube-api-access-spptw\") pod \"dnsmasq-dns-86db49b7ff-dtnnl\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.840302 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.980471 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ckpwc" event={"ID":"c63e79d7-eea0-447e-b944-cd93ce3ebf55","Type":"ContainerStarted","Data":"625c29494087e184229175d1086a9da923a4af9c92231b8597e96f003189f7b2"} Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.980825 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ckpwc" event={"ID":"c63e79d7-eea0-447e-b944-cd93ce3ebf55","Type":"ContainerStarted","Data":"1a93cd3906ce36f13a7a1db78f1a5a0efe78ea7d36860146880098ca1789e8c0"} Nov 25 19:51:01 crc kubenswrapper[4775]: I1125 19:51:01.982714 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:51:02 crc kubenswrapper[4775]: I1125 19:51:02.005901 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ckpwc" podStartSLOduration=9.684293128 podStartE2EDuration="19.005885406s" podCreationTimestamp="2025-11-25 19:50:43 +0000 UTC" firstStartedPulling="2025-11-25 19:50:47.901286682 +0000 UTC m=+1029.817649058" lastFinishedPulling="2025-11-25 19:50:57.22287893 +0000 UTC m=+1039.139241336" observedRunningTime="2025-11-25 19:51:02.003756509 +0000 UTC m=+1043.920118875" watchObservedRunningTime="2025-11-25 19:51:02.005885406 +0000 UTC m=+1043.922247772" Nov 25 19:51:02 crc kubenswrapper[4775]: I1125 19:51:02.161557 4775 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dtnnl"] Nov 25 19:51:02 crc kubenswrapper[4775]: I1125 19:51:02.491222 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 25 19:51:02 crc kubenswrapper[4775]: I1125 19:51:02.537911 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 25 19:51:02 crc kubenswrapper[4775]: I1125 19:51:02.994087 4775 generic.go:334] "Generic (PLEG): container finished" podID="4bfaa898-d953-4c25-b2ca-76fe02819141" containerID="8cbd96404e81414f09edcef62255acf6eeea7194d858d69e597b1d52c817574f" exitCode=0 Nov 25 19:51:02 crc kubenswrapper[4775]: I1125 19:51:02.994176 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" event={"ID":"4bfaa898-d953-4c25-b2ca-76fe02819141","Type":"ContainerDied","Data":"8cbd96404e81414f09edcef62255acf6eeea7194d858d69e597b1d52c817574f"} Nov 25 19:51:02 crc kubenswrapper[4775]: I1125 19:51:02.994487 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" event={"ID":"4bfaa898-d953-4c25-b2ca-76fe02819141","Type":"ContainerStarted","Data":"b5055b64a545a3949d13002d472d2c5841528d30f65d5b0f0d1b0b7b004ba6ed"} Nov 25 19:51:02 crc kubenswrapper[4775]: I1125 19:51:02.996382 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" podUID="fcc4981e-da3f-4d8c-a113-79521be59db1" containerName="dnsmasq-dns" containerID="cri-o://fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695" gracePeriod=10 Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:02.997607 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:02.998375 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.559536 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.573158 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td6sp\" (UniqueName: \"kubernetes.io/projected/fcc4981e-da3f-4d8c-a113-79521be59db1-kube-api-access-td6sp\") pod \"fcc4981e-da3f-4d8c-a113-79521be59db1\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.573899 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-dns-svc\") pod \"fcc4981e-da3f-4d8c-a113-79521be59db1\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.574076 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-ovsdbserver-nb\") pod \"fcc4981e-da3f-4d8c-a113-79521be59db1\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.574161 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-config\") pod \"fcc4981e-da3f-4d8c-a113-79521be59db1\" (UID: \"fcc4981e-da3f-4d8c-a113-79521be59db1\") " Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.580554 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc4981e-da3f-4d8c-a113-79521be59db1-kube-api-access-td6sp" (OuterVolumeSpecName: "kube-api-access-td6sp") pod "fcc4981e-da3f-4d8c-a113-79521be59db1" (UID: "fcc4981e-da3f-4d8c-a113-79521be59db1"). 
InnerVolumeSpecName "kube-api-access-td6sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.644779 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fcc4981e-da3f-4d8c-a113-79521be59db1" (UID: "fcc4981e-da3f-4d8c-a113-79521be59db1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.647111 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fcc4981e-da3f-4d8c-a113-79521be59db1" (UID: "fcc4981e-da3f-4d8c-a113-79521be59db1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.655357 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-config" (OuterVolumeSpecName: "config") pod "fcc4981e-da3f-4d8c-a113-79521be59db1" (UID: "fcc4981e-da3f-4d8c-a113-79521be59db1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.675898 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.675928 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.675940 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td6sp\" (UniqueName: \"kubernetes.io/projected/fcc4981e-da3f-4d8c-a113-79521be59db1-kube-api-access-td6sp\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.675954 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc4981e-da3f-4d8c-a113-79521be59db1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.807316 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.807369 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 25 19:51:03 crc kubenswrapper[4775]: I1125 19:51:03.848644 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.007743 4775 generic.go:334] "Generic (PLEG): container finished" podID="97e9f968-e12b-413d-a36b-7a2f16d0b1ec" containerID="115f7703a140f703c4c7d96bc334620c3f7c3696aacc05b0c2fa9a0b45946e9a" exitCode=0 Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.007844 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"97e9f968-e12b-413d-a36b-7a2f16d0b1ec","Type":"ContainerDied","Data":"115f7703a140f703c4c7d96bc334620c3f7c3696aacc05b0c2fa9a0b45946e9a"} Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.012555 4775 generic.go:334] "Generic (PLEG): container finished" podID="1c8a9cba-f38d-45fb-8a7e-942f148611ab" containerID="747efb043d5f50b6b2c2087d640b7ab23c0f504d3b4fc85b857a0a5378da097c" exitCode=0 Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.012640 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c8a9cba-f38d-45fb-8a7e-942f148611ab","Type":"ContainerDied","Data":"747efb043d5f50b6b2c2087d640b7ab23c0f504d3b4fc85b857a0a5378da097c"} Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.020464 4775 generic.go:334] "Generic (PLEG): container finished" podID="fcc4981e-da3f-4d8c-a113-79521be59db1" containerID="fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695" exitCode=0 Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.020568 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.020586 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" event={"ID":"fcc4981e-da3f-4d8c-a113-79521be59db1","Type":"ContainerDied","Data":"fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695"} Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.020632 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9s5xw" event={"ID":"fcc4981e-da3f-4d8c-a113-79521be59db1","Type":"ContainerDied","Data":"60c83fe9ead300680abc944f4324dedc00fed991bcfadfcf6fa7d97ffaaf4078"} Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.020673 4775 scope.go:117] "RemoveContainer" containerID="fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.032134 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" event={"ID":"4bfaa898-d953-4c25-b2ca-76fe02819141","Type":"ContainerStarted","Data":"4d49676b9aa807373934d3701e0dd51eb11699ce27ba8f5759aed2b1545c8afb"} Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.114917 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.122432 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" podStartSLOduration=3.122405665 podStartE2EDuration="3.122405665s" podCreationTimestamp="2025-11-25 19:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:51:04.115447959 +0000 UTC m=+1046.031810345" watchObservedRunningTime="2025-11-25 19:51:04.122405665 +0000 UTC m=+1046.038768041" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.124241 4775 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.201640 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9s5xw"] Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.202270 4775 scope.go:117] "RemoveContainer" containerID="8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.227220 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9s5xw"] Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.239477 4775 scope.go:117] "RemoveContainer" containerID="fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695" Nov 25 19:51:04 crc kubenswrapper[4775]: E1125 19:51:04.242340 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695\": container with ID starting with fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695 not found: ID does not exist" containerID="fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.242375 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695"} err="failed to get container status \"fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695\": rpc error: code = NotFound desc = could not find container \"fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695\": container with ID starting with fe35da7fa88426445e3367244b3c9d46c54a220c1b6657b02e94bad06ca3a695 not found: ID does not exist" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.242396 4775 scope.go:117] "RemoveContainer" 
containerID="8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053" Nov 25 19:51:04 crc kubenswrapper[4775]: E1125 19:51:04.244872 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053\": container with ID starting with 8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053 not found: ID does not exist" containerID="8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.244901 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053"} err="failed to get container status \"8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053\": rpc error: code = NotFound desc = could not find container \"8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053\": container with ID starting with 8ee73943d080a400edecda3441bb57bf8532a1a507c3dc44f0dd89d914cc8053 not found: ID does not exist" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.398030 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 25 19:51:04 crc kubenswrapper[4775]: E1125 19:51:04.398624 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc4981e-da3f-4d8c-a113-79521be59db1" containerName="init" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.398763 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc4981e-da3f-4d8c-a113-79521be59db1" containerName="init" Nov 25 19:51:04 crc kubenswrapper[4775]: E1125 19:51:04.398795 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc4981e-da3f-4d8c-a113-79521be59db1" containerName="dnsmasq-dns" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.398802 4775 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fcc4981e-da3f-4d8c-a113-79521be59db1" containerName="dnsmasq-dns" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.398958 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc4981e-da3f-4d8c-a113-79521be59db1" containerName="dnsmasq-dns" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.399766 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.402436 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.402459 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.402684 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-dx7tc" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.405687 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.408073 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.489122 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsr9z\" (UniqueName: \"kubernetes.io/projected/0a7b999c-f778-4a60-9cad-b00875e7713b-kube-api-access-tsr9z\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.489173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7b999c-f778-4a60-9cad-b00875e7713b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.489260 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0a7b999c-f778-4a60-9cad-b00875e7713b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.489370 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b999c-f778-4a60-9cad-b00875e7713b-config\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.489512 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7b999c-f778-4a60-9cad-b00875e7713b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.489603 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7b999c-f778-4a60-9cad-b00875e7713b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.489697 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a7b999c-f778-4a60-9cad-b00875e7713b-scripts\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.590942 
4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b999c-f778-4a60-9cad-b00875e7713b-config\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.591005 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7b999c-f778-4a60-9cad-b00875e7713b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.591040 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7b999c-f778-4a60-9cad-b00875e7713b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.591075 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a7b999c-f778-4a60-9cad-b00875e7713b-scripts\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.591109 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsr9z\" (UniqueName: \"kubernetes.io/projected/0a7b999c-f778-4a60-9cad-b00875e7713b-kube-api-access-tsr9z\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.591134 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7b999c-f778-4a60-9cad-b00875e7713b-ovn-northd-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.591185 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0a7b999c-f778-4a60-9cad-b00875e7713b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.592220 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0a7b999c-f778-4a60-9cad-b00875e7713b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.592479 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b999c-f778-4a60-9cad-b00875e7713b-config\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.593086 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a7b999c-f778-4a60-9cad-b00875e7713b-scripts\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.595539 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7b999c-f778-4a60-9cad-b00875e7713b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.596274 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0a7b999c-f778-4a60-9cad-b00875e7713b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.596524 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7b999c-f778-4a60-9cad-b00875e7713b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.609299 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsr9z\" (UniqueName: \"kubernetes.io/projected/0a7b999c-f778-4a60-9cad-b00875e7713b-kube-api-access-tsr9z\") pod \"ovn-northd-0\" (UID: \"0a7b999c-f778-4a60-9cad-b00875e7713b\") " pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.718755 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 19:51:04 crc kubenswrapper[4775]: I1125 19:51:04.861407 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc4981e-da3f-4d8c-a113-79521be59db1" path="/var/lib/kubelet/pods/fcc4981e-da3f-4d8c-a113-79521be59db1/volumes" Nov 25 19:51:05 crc kubenswrapper[4775]: I1125 19:51:05.046639 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c8a9cba-f38d-45fb-8a7e-942f148611ab","Type":"ContainerStarted","Data":"b62569348b4a32c9da7422b7e1393429ec45b90e1b8e026c1d9c03df19fb4a70"} Nov 25 19:51:05 crc kubenswrapper[4775]: I1125 19:51:05.055127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"97e9f968-e12b-413d-a36b-7a2f16d0b1ec","Type":"ContainerStarted","Data":"fedaca852db6a90578583711311ccd4b2605dbe48f442abe9c49e4f0bb76123a"} Nov 25 19:51:05 crc kubenswrapper[4775]: I1125 19:51:05.055174 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:05 crc kubenswrapper[4775]: I1125 19:51:05.072970 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.03798082 podStartE2EDuration="30.072952911s" podCreationTimestamp="2025-11-25 19:50:35 +0000 UTC" firstStartedPulling="2025-11-25 19:50:47.356937995 +0000 UTC m=+1029.273300361" lastFinishedPulling="2025-11-25 19:50:58.391910056 +0000 UTC m=+1040.308272452" observedRunningTime="2025-11-25 19:51:05.066844347 +0000 UTC m=+1046.983206713" watchObservedRunningTime="2025-11-25 19:51:05.072952911 +0000 UTC m=+1046.989315277" Nov 25 19:51:05 crc kubenswrapper[4775]: I1125 19:51:05.086380 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=18.548167572 podStartE2EDuration="28.08636308s" podCreationTimestamp="2025-11-25 
19:50:37 +0000 UTC" firstStartedPulling="2025-11-25 19:50:47.795852379 +0000 UTC m=+1029.712214765" lastFinishedPulling="2025-11-25 19:50:57.334047867 +0000 UTC m=+1039.250410273" observedRunningTime="2025-11-25 19:51:05.082997339 +0000 UTC m=+1046.999359725" watchObservedRunningTime="2025-11-25 19:51:05.08636308 +0000 UTC m=+1047.002725446" Nov 25 19:51:05 crc kubenswrapper[4775]: I1125 19:51:05.206977 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 19:51:06 crc kubenswrapper[4775]: I1125 19:51:06.064745 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0a7b999c-f778-4a60-9cad-b00875e7713b","Type":"ContainerStarted","Data":"6bf285da819ea31019160b303ef6e97a6e35bafa2507d78fc22cfacaf5bc3f19"} Nov 25 19:51:07 crc kubenswrapper[4775]: I1125 19:51:07.137402 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 19:51:07 crc kubenswrapper[4775]: I1125 19:51:07.137806 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 19:51:08 crc kubenswrapper[4775]: I1125 19:51:08.571224 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 19:51:08 crc kubenswrapper[4775]: I1125 19:51:08.571297 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 19:51:08 crc kubenswrapper[4775]: I1125 19:51:08.925075 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 19:51:10 crc kubenswrapper[4775]: I1125 19:51:10.655990 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 19:51:11 crc kubenswrapper[4775]: I1125 19:51:11.841958 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:11 crc kubenswrapper[4775]: I1125 19:51:11.918516 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-md645"] Nov 25 19:51:11 crc kubenswrapper[4775]: I1125 19:51:11.918800 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-md645" podUID="2a4c8eb9-a366-47bf-9364-a991c7fc9836" containerName="dnsmasq-dns" containerID="cri-o://718241a82a8d1cd2cc133a9f27f3c955ce316c73820018a81885e147df0303fe" gracePeriod=10 Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.121221 4775 generic.go:334] "Generic (PLEG): container finished" podID="2a4c8eb9-a366-47bf-9364-a991c7fc9836" containerID="718241a82a8d1cd2cc133a9f27f3c955ce316c73820018a81885e147df0303fe" exitCode=0 Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.121422 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-md645" event={"ID":"2a4c8eb9-a366-47bf-9364-a991c7fc9836","Type":"ContainerDied","Data":"718241a82a8d1cd2cc133a9f27f3c955ce316c73820018a81885e147df0303fe"} Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.330586 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.437434 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-dns-svc\") pod \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.437838 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9nn6\" (UniqueName: \"kubernetes.io/projected/2a4c8eb9-a366-47bf-9364-a991c7fc9836-kube-api-access-b9nn6\") pod \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.437864 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-config\") pod \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\" (UID: \"2a4c8eb9-a366-47bf-9364-a991c7fc9836\") " Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.441874 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4c8eb9-a366-47bf-9364-a991c7fc9836-kube-api-access-b9nn6" (OuterVolumeSpecName: "kube-api-access-b9nn6") pod "2a4c8eb9-a366-47bf-9364-a991c7fc9836" (UID: "2a4c8eb9-a366-47bf-9364-a991c7fc9836"). InnerVolumeSpecName "kube-api-access-b9nn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.477417 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a4c8eb9-a366-47bf-9364-a991c7fc9836" (UID: "2a4c8eb9-a366-47bf-9364-a991c7fc9836"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.477465 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-config" (OuterVolumeSpecName: "config") pod "2a4c8eb9-a366-47bf-9364-a991c7fc9836" (UID: "2a4c8eb9-a366-47bf-9364-a991c7fc9836"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.540149 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.540183 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a4c8eb9-a366-47bf-9364-a991c7fc9836-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:12 crc kubenswrapper[4775]: I1125 19:51:12.540193 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9nn6\" (UniqueName: \"kubernetes.io/projected/2a4c8eb9-a366-47bf-9364-a991c7fc9836-kube-api-access-b9nn6\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.132810 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0a7b999c-f778-4a60-9cad-b00875e7713b","Type":"ContainerStarted","Data":"2ecb97b342543b009e4de89f8424dfd5a52c5b973ac6c599d8c1e08a620f1e2f"} Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.132891 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0a7b999c-f778-4a60-9cad-b00875e7713b","Type":"ContainerStarted","Data":"4d806cad0db8545bdb6764f102787719ed280acb52903b2d518d7ad9e932bb05"} Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.132932 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-northd-0" Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.134817 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-md645" event={"ID":"2a4c8eb9-a366-47bf-9364-a991c7fc9836","Type":"ContainerDied","Data":"8d4b91da7407d01d65295e0c233800560fa4a3171059c8a3e5127a78987d3221"} Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.134848 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-md645" Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.134882 4775 scope.go:117] "RemoveContainer" containerID="718241a82a8d1cd2cc133a9f27f3c955ce316c73820018a81885e147df0303fe" Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.171414 4775 scope.go:117] "RemoveContainer" containerID="0b3fa344249a2f2219e3b31e73f54fbf5d4db0bf17d6a6950b6140026f734e17" Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.190127 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.5799088770000003 podStartE2EDuration="9.190102664s" podCreationTimestamp="2025-11-25 19:51:04 +0000 UTC" firstStartedPulling="2025-11-25 19:51:05.207334539 +0000 UTC m=+1047.123696905" lastFinishedPulling="2025-11-25 19:51:11.817528286 +0000 UTC m=+1053.733890692" observedRunningTime="2025-11-25 19:51:13.168163606 +0000 UTC m=+1055.084526012" watchObservedRunningTime="2025-11-25 19:51:13.190102664 +0000 UTC m=+1055.106465030" Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.197526 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-md645"] Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.203739 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-md645"] Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.267831 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-galera-0" Nov 25 19:51:13 crc kubenswrapper[4775]: I1125 19:51:13.355619 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 25 19:51:14 crc kubenswrapper[4775]: I1125 19:51:14.862547 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a4c8eb9-a366-47bf-9364-a991c7fc9836" path="/var/lib/kubelet/pods/2a4c8eb9-a366-47bf-9364-a991c7fc9836/volumes" Nov 25 19:51:15 crc kubenswrapper[4775]: I1125 19:51:15.246033 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 19:51:15 crc kubenswrapper[4775]: I1125 19:51:15.307926 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.568893 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7828-account-create-update-r8568"] Nov 25 19:51:18 crc kubenswrapper[4775]: E1125 19:51:18.569779 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4c8eb9-a366-47bf-9364-a991c7fc9836" containerName="dnsmasq-dns" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.569803 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4c8eb9-a366-47bf-9364-a991c7fc9836" containerName="dnsmasq-dns" Nov 25 19:51:18 crc kubenswrapper[4775]: E1125 19:51:18.569831 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4c8eb9-a366-47bf-9364-a991c7fc9836" containerName="init" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.569843 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4c8eb9-a366-47bf-9364-a991c7fc9836" containerName="init" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.570158 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4c8eb9-a366-47bf-9364-a991c7fc9836" containerName="dnsmasq-dns" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 
19:51:18.571158 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.575055 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.601475 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7828-account-create-update-r8568"] Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.644137 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8s2fd"] Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.645504 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.653463 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrl5z\" (UniqueName: \"kubernetes.io/projected/316abeb5-c5ff-4da2-b056-0f704b710dc7-kube-api-access-lrl5z\") pod \"keystone-db-create-8s2fd\" (UID: \"316abeb5-c5ff-4da2-b056-0f704b710dc7\") " pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.653576 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpr6\" (UniqueName: \"kubernetes.io/projected/4a5c5668-58e5-4380-a864-1be4be778b9e-kube-api-access-9cpr6\") pod \"keystone-7828-account-create-update-r8568\" (UID: \"4a5c5668-58e5-4380-a864-1be4be778b9e\") " pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.653671 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5c5668-58e5-4380-a864-1be4be778b9e-operator-scripts\") pod 
\"keystone-7828-account-create-update-r8568\" (UID: \"4a5c5668-58e5-4380-a864-1be4be778b9e\") " pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.653792 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316abeb5-c5ff-4da2-b056-0f704b710dc7-operator-scripts\") pod \"keystone-db-create-8s2fd\" (UID: \"316abeb5-c5ff-4da2-b056-0f704b710dc7\") " pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.654427 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8s2fd"] Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.755101 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrl5z\" (UniqueName: \"kubernetes.io/projected/316abeb5-c5ff-4da2-b056-0f704b710dc7-kube-api-access-lrl5z\") pod \"keystone-db-create-8s2fd\" (UID: \"316abeb5-c5ff-4da2-b056-0f704b710dc7\") " pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.755194 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cpr6\" (UniqueName: \"kubernetes.io/projected/4a5c5668-58e5-4380-a864-1be4be778b9e-kube-api-access-9cpr6\") pod \"keystone-7828-account-create-update-r8568\" (UID: \"4a5c5668-58e5-4380-a864-1be4be778b9e\") " pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.755256 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5c5668-58e5-4380-a864-1be4be778b9e-operator-scripts\") pod \"keystone-7828-account-create-update-r8568\" (UID: \"4a5c5668-58e5-4380-a864-1be4be778b9e\") " pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:18 crc 
kubenswrapper[4775]: I1125 19:51:18.755322 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316abeb5-c5ff-4da2-b056-0f704b710dc7-operator-scripts\") pod \"keystone-db-create-8s2fd\" (UID: \"316abeb5-c5ff-4da2-b056-0f704b710dc7\") " pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.756304 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316abeb5-c5ff-4da2-b056-0f704b710dc7-operator-scripts\") pod \"keystone-db-create-8s2fd\" (UID: \"316abeb5-c5ff-4da2-b056-0f704b710dc7\") " pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.756335 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5c5668-58e5-4380-a864-1be4be778b9e-operator-scripts\") pod \"keystone-7828-account-create-update-r8568\" (UID: \"4a5c5668-58e5-4380-a864-1be4be778b9e\") " pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.782891 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrl5z\" (UniqueName: \"kubernetes.io/projected/316abeb5-c5ff-4da2-b056-0f704b710dc7-kube-api-access-lrl5z\") pod \"keystone-db-create-8s2fd\" (UID: \"316abeb5-c5ff-4da2-b056-0f704b710dc7\") " pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.785308 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cpr6\" (UniqueName: \"kubernetes.io/projected/4a5c5668-58e5-4380-a864-1be4be778b9e-kube-api-access-9cpr6\") pod \"keystone-7828-account-create-update-r8568\" (UID: \"4a5c5668-58e5-4380-a864-1be4be778b9e\") " pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:18 crc 
kubenswrapper[4775]: I1125 19:51:18.830768 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-797rb"] Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.832179 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-797rb" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.844300 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-797rb"] Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.857296 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c687c95-b35b-44f3-8db5-a32e1462e604-operator-scripts\") pod \"placement-db-create-797rb\" (UID: \"7c687c95-b35b-44f3-8db5-a32e1462e604\") " pod="openstack/placement-db-create-797rb" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.857512 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w82p8\" (UniqueName: \"kubernetes.io/projected/7c687c95-b35b-44f3-8db5-a32e1462e604-kube-api-access-w82p8\") pod \"placement-db-create-797rb\" (UID: \"7c687c95-b35b-44f3-8db5-a32e1462e604\") " pod="openstack/placement-db-create-797rb" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.899311 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.944607 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8fab-account-create-update-l6j8t"] Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.946241 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.948738 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.959634 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/327bd993-33a8-41c1-81e5-a3e8d92ff438-operator-scripts\") pod \"placement-8fab-account-create-update-l6j8t\" (UID: \"327bd993-33a8-41c1-81e5-a3e8d92ff438\") " pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.959730 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8fab-account-create-update-l6j8t"] Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.959831 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c687c95-b35b-44f3-8db5-a32e1462e604-operator-scripts\") pod \"placement-db-create-797rb\" (UID: \"7c687c95-b35b-44f3-8db5-a32e1462e604\") " pod="openstack/placement-db-create-797rb" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.959948 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmp4l\" (UniqueName: \"kubernetes.io/projected/327bd993-33a8-41c1-81e5-a3e8d92ff438-kube-api-access-fmp4l\") pod \"placement-8fab-account-create-update-l6j8t\" (UID: \"327bd993-33a8-41c1-81e5-a3e8d92ff438\") " pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.960085 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w82p8\" (UniqueName: \"kubernetes.io/projected/7c687c95-b35b-44f3-8db5-a32e1462e604-kube-api-access-w82p8\") pod 
\"placement-db-create-797rb\" (UID: \"7c687c95-b35b-44f3-8db5-a32e1462e604\") " pod="openstack/placement-db-create-797rb" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.961490 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c687c95-b35b-44f3-8db5-a32e1462e604-operator-scripts\") pod \"placement-db-create-797rb\" (UID: \"7c687c95-b35b-44f3-8db5-a32e1462e604\") " pod="openstack/placement-db-create-797rb" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.971446 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:18 crc kubenswrapper[4775]: I1125 19:51:18.989463 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w82p8\" (UniqueName: \"kubernetes.io/projected/7c687c95-b35b-44f3-8db5-a32e1462e604-kube-api-access-w82p8\") pod \"placement-db-create-797rb\" (UID: \"7c687c95-b35b-44f3-8db5-a32e1462e604\") " pod="openstack/placement-db-create-797rb" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.061667 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/327bd993-33a8-41c1-81e5-a3e8d92ff438-operator-scripts\") pod \"placement-8fab-account-create-update-l6j8t\" (UID: \"327bd993-33a8-41c1-81e5-a3e8d92ff438\") " pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.061757 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmp4l\" (UniqueName: \"kubernetes.io/projected/327bd993-33a8-41c1-81e5-a3e8d92ff438-kube-api-access-fmp4l\") pod \"placement-8fab-account-create-update-l6j8t\" (UID: \"327bd993-33a8-41c1-81e5-a3e8d92ff438\") " pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.064413 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/327bd993-33a8-41c1-81e5-a3e8d92ff438-operator-scripts\") pod \"placement-8fab-account-create-update-l6j8t\" (UID: \"327bd993-33a8-41c1-81e5-a3e8d92ff438\") " pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.080255 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmp4l\" (UniqueName: \"kubernetes.io/projected/327bd993-33a8-41c1-81e5-a3e8d92ff438-kube-api-access-fmp4l\") pod \"placement-8fab-account-create-update-l6j8t\" (UID: \"327bd993-33a8-41c1-81e5-a3e8d92ff438\") " pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.099344 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ft6j6"] Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.100683 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.105481 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ft6j6"] Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.163026 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9e08f8-216a-4f56-a8ac-e7f53147c636-operator-scripts\") pod \"glance-db-create-ft6j6\" (UID: \"be9e08f8-216a-4f56-a8ac-e7f53147c636\") " pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.163168 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfqj\" (UniqueName: \"kubernetes.io/projected/be9e08f8-216a-4f56-a8ac-e7f53147c636-kube-api-access-rlfqj\") pod \"glance-db-create-ft6j6\" (UID: \"be9e08f8-216a-4f56-a8ac-e7f53147c636\") " pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.167808 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-797rb" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.241352 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-22a0-account-create-update-fxms6"] Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.242242 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.246274 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.254589 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-22a0-account-create-update-fxms6"] Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.267032 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlfqj\" (UniqueName: \"kubernetes.io/projected/be9e08f8-216a-4f56-a8ac-e7f53147c636-kube-api-access-rlfqj\") pod \"glance-db-create-ft6j6\" (UID: \"be9e08f8-216a-4f56-a8ac-e7f53147c636\") " pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.267192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpnhl\" (UniqueName: \"kubernetes.io/projected/55b42817-fba4-49ac-9215-3efbb04f0ef9-kube-api-access-xpnhl\") pod \"glance-22a0-account-create-update-fxms6\" (UID: \"55b42817-fba4-49ac-9215-3efbb04f0ef9\") " pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.267258 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9e08f8-216a-4f56-a8ac-e7f53147c636-operator-scripts\") pod \"glance-db-create-ft6j6\" (UID: \"be9e08f8-216a-4f56-a8ac-e7f53147c636\") " pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.267306 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b42817-fba4-49ac-9215-3efbb04f0ef9-operator-scripts\") pod \"glance-22a0-account-create-update-fxms6\" (UID: 
\"55b42817-fba4-49ac-9215-3efbb04f0ef9\") " pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.270407 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9e08f8-216a-4f56-a8ac-e7f53147c636-operator-scripts\") pod \"glance-db-create-ft6j6\" (UID: \"be9e08f8-216a-4f56-a8ac-e7f53147c636\") " pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.289690 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlfqj\" (UniqueName: \"kubernetes.io/projected/be9e08f8-216a-4f56-a8ac-e7f53147c636-kube-api-access-rlfqj\") pod \"glance-db-create-ft6j6\" (UID: \"be9e08f8-216a-4f56-a8ac-e7f53147c636\") " pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.335269 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.368544 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b42817-fba4-49ac-9215-3efbb04f0ef9-operator-scripts\") pod \"glance-22a0-account-create-update-fxms6\" (UID: \"55b42817-fba4-49ac-9215-3efbb04f0ef9\") " pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.368991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpnhl\" (UniqueName: \"kubernetes.io/projected/55b42817-fba4-49ac-9215-3efbb04f0ef9-kube-api-access-xpnhl\") pod \"glance-22a0-account-create-update-fxms6\" (UID: \"55b42817-fba4-49ac-9215-3efbb04f0ef9\") " pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.369579 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b42817-fba4-49ac-9215-3efbb04f0ef9-operator-scripts\") pod \"glance-22a0-account-create-update-fxms6\" (UID: \"55b42817-fba4-49ac-9215-3efbb04f0ef9\") " pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.390530 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpnhl\" (UniqueName: \"kubernetes.io/projected/55b42817-fba4-49ac-9215-3efbb04f0ef9-kube-api-access-xpnhl\") pod \"glance-22a0-account-create-update-fxms6\" (UID: \"55b42817-fba4-49ac-9215-3efbb04f0ef9\") " pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.402603 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7828-account-create-update-r8568"] Nov 25 19:51:19 crc kubenswrapper[4775]: W1125 19:51:19.405446 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a5c5668_58e5_4380_a864_1be4be778b9e.slice/crio-07e99c0cba4260fa9eeb9ba9ca022d98374126e364238b614f6aa6508e579253 WatchSource:0}: Error finding container 07e99c0cba4260fa9eeb9ba9ca022d98374126e364238b614f6aa6508e579253: Status 404 returned error can't find the container with id 07e99c0cba4260fa9eeb9ba9ca022d98374126e364238b614f6aa6508e579253 Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.426955 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.480044 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8s2fd"] Nov 25 19:51:19 crc kubenswrapper[4775]: W1125 19:51:19.500091 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod316abeb5_c5ff_4da2_b056_0f704b710dc7.slice/crio-b3dc005cb1dc9fb324f0ee3e11c0f8fd82dc81546539dd7662ac458d9251e641 WatchSource:0}: Error finding container b3dc005cb1dc9fb324f0ee3e11c0f8fd82dc81546539dd7662ac458d9251e641: Status 404 returned error can't find the container with id b3dc005cb1dc9fb324f0ee3e11c0f8fd82dc81546539dd7662ac458d9251e641 Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.560476 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.577484 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8fab-account-create-update-l6j8t"] Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.610607 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-797rb"] Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.862843 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ft6j6"] Nov 25 19:51:19 crc kubenswrapper[4775]: W1125 19:51:19.885334 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe9e08f8_216a_4f56_a8ac_e7f53147c636.slice/crio-a0cc6b0843d84e15b5da9f25ee896d77f9f28e5aab89fc3f3785f68ad3e26f54 WatchSource:0}: Error finding container a0cc6b0843d84e15b5da9f25ee896d77f9f28e5aab89fc3f3785f68ad3e26f54: Status 404 returned error can't find the container with id a0cc6b0843d84e15b5da9f25ee896d77f9f28e5aab89fc3f3785f68ad3e26f54 
Nov 25 19:51:19 crc kubenswrapper[4775]: I1125 19:51:19.984115 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-22a0-account-create-update-fxms6"] Nov 25 19:51:19 crc kubenswrapper[4775]: W1125 19:51:19.993144 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55b42817_fba4_49ac_9215_3efbb04f0ef9.slice/crio-928e5f473bdefd0d8f43dabcb69f8b3ea505e36b0a12b9489f1ed12c56f91332 WatchSource:0}: Error finding container 928e5f473bdefd0d8f43dabcb69f8b3ea505e36b0a12b9489f1ed12c56f91332: Status 404 returned error can't find the container with id 928e5f473bdefd0d8f43dabcb69f8b3ea505e36b0a12b9489f1ed12c56f91332 Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.207395 4775 generic.go:334] "Generic (PLEG): container finished" podID="4a5c5668-58e5-4380-a864-1be4be778b9e" containerID="f78da86ad06f55f67b274ca1d529925f0a9a607bb3d1fb8fafdcda6981d731be" exitCode=0 Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.207530 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7828-account-create-update-r8568" event={"ID":"4a5c5668-58e5-4380-a864-1be4be778b9e","Type":"ContainerDied","Data":"f78da86ad06f55f67b274ca1d529925f0a9a607bb3d1fb8fafdcda6981d731be"} Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.207573 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7828-account-create-update-r8568" event={"ID":"4a5c5668-58e5-4380-a864-1be4be778b9e","Type":"ContainerStarted","Data":"07e99c0cba4260fa9eeb9ba9ca022d98374126e364238b614f6aa6508e579253"} Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.212453 4775 generic.go:334] "Generic (PLEG): container finished" podID="327bd993-33a8-41c1-81e5-a3e8d92ff438" containerID="24254b492ee99ddad75b5b63d9136ba70d585a419c6b70d69d7732ae2ce33613" exitCode=0 Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.212533 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-8fab-account-create-update-l6j8t" event={"ID":"327bd993-33a8-41c1-81e5-a3e8d92ff438","Type":"ContainerDied","Data":"24254b492ee99ddad75b5b63d9136ba70d585a419c6b70d69d7732ae2ce33613"} Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.212611 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8fab-account-create-update-l6j8t" event={"ID":"327bd993-33a8-41c1-81e5-a3e8d92ff438","Type":"ContainerStarted","Data":"9c36d7e9255d59afa3a09dc7a1a684fdc5083fec9e23d9780eea300532a5ffbf"} Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.217547 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ft6j6" event={"ID":"be9e08f8-216a-4f56-a8ac-e7f53147c636","Type":"ContainerStarted","Data":"a0cc6b0843d84e15b5da9f25ee896d77f9f28e5aab89fc3f3785f68ad3e26f54"} Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.219543 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-22a0-account-create-update-fxms6" event={"ID":"55b42817-fba4-49ac-9215-3efbb04f0ef9","Type":"ContainerStarted","Data":"928e5f473bdefd0d8f43dabcb69f8b3ea505e36b0a12b9489f1ed12c56f91332"} Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.235737 4775 generic.go:334] "Generic (PLEG): container finished" podID="316abeb5-c5ff-4da2-b056-0f704b710dc7" containerID="ec3b18e5aa5a42fcc4d9ef9256002ec922ed720d2e06f54c196ffe8d06cba268" exitCode=0 Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.235841 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8s2fd" event={"ID":"316abeb5-c5ff-4da2-b056-0f704b710dc7","Type":"ContainerDied","Data":"ec3b18e5aa5a42fcc4d9ef9256002ec922ed720d2e06f54c196ffe8d06cba268"} Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.235881 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8s2fd" 
event={"ID":"316abeb5-c5ff-4da2-b056-0f704b710dc7","Type":"ContainerStarted","Data":"b3dc005cb1dc9fb324f0ee3e11c0f8fd82dc81546539dd7662ac458d9251e641"} Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.239915 4775 generic.go:334] "Generic (PLEG): container finished" podID="7c687c95-b35b-44f3-8db5-a32e1462e604" containerID="3b6df1bd0d69df7256523aa461978d9de3034cafa16585c1acf23d82019e0efa" exitCode=0 Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.239986 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-797rb" event={"ID":"7c687c95-b35b-44f3-8db5-a32e1462e604","Type":"ContainerDied","Data":"3b6df1bd0d69df7256523aa461978d9de3034cafa16585c1acf23d82019e0efa"} Nov 25 19:51:20 crc kubenswrapper[4775]: I1125 19:51:20.240027 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-797rb" event={"ID":"7c687c95-b35b-44f3-8db5-a32e1462e604","Type":"ContainerStarted","Data":"313d570cb467d44f6e98505674dbaae08c4d69796eedd618a38d437aa41482e3"} Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.253690 4775 generic.go:334] "Generic (PLEG): container finished" podID="be9e08f8-216a-4f56-a8ac-e7f53147c636" containerID="9814e148cbe862425145b65aa626c1950d34122682edfb4d9d366fc157703291" exitCode=0 Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.253796 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ft6j6" event={"ID":"be9e08f8-216a-4f56-a8ac-e7f53147c636","Type":"ContainerDied","Data":"9814e148cbe862425145b65aa626c1950d34122682edfb4d9d366fc157703291"} Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.257061 4775 generic.go:334] "Generic (PLEG): container finished" podID="55b42817-fba4-49ac-9215-3efbb04f0ef9" containerID="b10a9993defbe80f82fc1c229471177497a1a1742f5cb7997732598ef7a8622a" exitCode=0 Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.257216 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-22a0-account-create-update-fxms6" event={"ID":"55b42817-fba4-49ac-9215-3efbb04f0ef9","Type":"ContainerDied","Data":"b10a9993defbe80f82fc1c229471177497a1a1742f5cb7997732598ef7a8622a"} Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.670947 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-797rb" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.713573 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c687c95-b35b-44f3-8db5-a32e1462e604-operator-scripts\") pod \"7c687c95-b35b-44f3-8db5-a32e1462e604\" (UID: \"7c687c95-b35b-44f3-8db5-a32e1462e604\") " Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.713862 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w82p8\" (UniqueName: \"kubernetes.io/projected/7c687c95-b35b-44f3-8db5-a32e1462e604-kube-api-access-w82p8\") pod \"7c687c95-b35b-44f3-8db5-a32e1462e604\" (UID: \"7c687c95-b35b-44f3-8db5-a32e1462e604\") " Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.714453 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c687c95-b35b-44f3-8db5-a32e1462e604-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c687c95-b35b-44f3-8db5-a32e1462e604" (UID: "7c687c95-b35b-44f3-8db5-a32e1462e604"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.724854 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c687c95-b35b-44f3-8db5-a32e1462e604-kube-api-access-w82p8" (OuterVolumeSpecName: "kube-api-access-w82p8") pod "7c687c95-b35b-44f3-8db5-a32e1462e604" (UID: "7c687c95-b35b-44f3-8db5-a32e1462e604"). InnerVolumeSpecName "kube-api-access-w82p8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.798999 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.806907 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.815522 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/327bd993-33a8-41c1-81e5-a3e8d92ff438-operator-scripts\") pod \"327bd993-33a8-41c1-81e5-a3e8d92ff438\" (UID: \"327bd993-33a8-41c1-81e5-a3e8d92ff438\") " Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.816039 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmp4l\" (UniqueName: \"kubernetes.io/projected/327bd993-33a8-41c1-81e5-a3e8d92ff438-kube-api-access-fmp4l\") pod \"327bd993-33a8-41c1-81e5-a3e8d92ff438\" (UID: \"327bd993-33a8-41c1-81e5-a3e8d92ff438\") " Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.817046 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w82p8\" (UniqueName: \"kubernetes.io/projected/7c687c95-b35b-44f3-8db5-a32e1462e604-kube-api-access-w82p8\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.817129 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c687c95-b35b-44f3-8db5-a32e1462e604-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.819992 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/327bd993-33a8-41c1-81e5-a3e8d92ff438-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "327bd993-33a8-41c1-81e5-a3e8d92ff438" (UID: "327bd993-33a8-41c1-81e5-a3e8d92ff438"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.820886 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/327bd993-33a8-41c1-81e5-a3e8d92ff438-kube-api-access-fmp4l" (OuterVolumeSpecName: "kube-api-access-fmp4l") pod "327bd993-33a8-41c1-81e5-a3e8d92ff438" (UID: "327bd993-33a8-41c1-81e5-a3e8d92ff438"). InnerVolumeSpecName "kube-api-access-fmp4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.823342 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.917627 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5c5668-58e5-4380-a864-1be4be778b9e-operator-scripts\") pod \"4a5c5668-58e5-4380-a864-1be4be778b9e\" (UID: \"4a5c5668-58e5-4380-a864-1be4be778b9e\") " Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.917734 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cpr6\" (UniqueName: \"kubernetes.io/projected/4a5c5668-58e5-4380-a864-1be4be778b9e-kube-api-access-9cpr6\") pod \"4a5c5668-58e5-4380-a864-1be4be778b9e\" (UID: \"4a5c5668-58e5-4380-a864-1be4be778b9e\") " Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.917763 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrl5z\" (UniqueName: \"kubernetes.io/projected/316abeb5-c5ff-4da2-b056-0f704b710dc7-kube-api-access-lrl5z\") pod \"316abeb5-c5ff-4da2-b056-0f704b710dc7\" (UID: \"316abeb5-c5ff-4da2-b056-0f704b710dc7\") " Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 
19:51:21.917792 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316abeb5-c5ff-4da2-b056-0f704b710dc7-operator-scripts\") pod \"316abeb5-c5ff-4da2-b056-0f704b710dc7\" (UID: \"316abeb5-c5ff-4da2-b056-0f704b710dc7\") " Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.918046 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/327bd993-33a8-41c1-81e5-a3e8d92ff438-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.918064 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmp4l\" (UniqueName: \"kubernetes.io/projected/327bd993-33a8-41c1-81e5-a3e8d92ff438-kube-api-access-fmp4l\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.918398 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/316abeb5-c5ff-4da2-b056-0f704b710dc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "316abeb5-c5ff-4da2-b056-0f704b710dc7" (UID: "316abeb5-c5ff-4da2-b056-0f704b710dc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.918397 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a5c5668-58e5-4380-a864-1be4be778b9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a5c5668-58e5-4380-a864-1be4be778b9e" (UID: "4a5c5668-58e5-4380-a864-1be4be778b9e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.920581 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5c5668-58e5-4380-a864-1be4be778b9e-kube-api-access-9cpr6" (OuterVolumeSpecName: "kube-api-access-9cpr6") pod "4a5c5668-58e5-4380-a864-1be4be778b9e" (UID: "4a5c5668-58e5-4380-a864-1be4be778b9e"). InnerVolumeSpecName "kube-api-access-9cpr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:21 crc kubenswrapper[4775]: I1125 19:51:21.921867 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316abeb5-c5ff-4da2-b056-0f704b710dc7-kube-api-access-lrl5z" (OuterVolumeSpecName: "kube-api-access-lrl5z") pod "316abeb5-c5ff-4da2-b056-0f704b710dc7" (UID: "316abeb5-c5ff-4da2-b056-0f704b710dc7"). InnerVolumeSpecName "kube-api-access-lrl5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.019795 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cpr6\" (UniqueName: \"kubernetes.io/projected/4a5c5668-58e5-4380-a864-1be4be778b9e-kube-api-access-9cpr6\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.019853 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrl5z\" (UniqueName: \"kubernetes.io/projected/316abeb5-c5ff-4da2-b056-0f704b710dc7-kube-api-access-lrl5z\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.019874 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316abeb5-c5ff-4da2-b056-0f704b710dc7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.019895 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4a5c5668-58e5-4380-a864-1be4be778b9e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.283684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-797rb" event={"ID":"7c687c95-b35b-44f3-8db5-a32e1462e604","Type":"ContainerDied","Data":"313d570cb467d44f6e98505674dbaae08c4d69796eedd618a38d437aa41482e3"} Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.283744 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="313d570cb467d44f6e98505674dbaae08c4d69796eedd618a38d437aa41482e3" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.283843 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-797rb" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.288782 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7828-account-create-update-r8568" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.288799 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7828-account-create-update-r8568" event={"ID":"4a5c5668-58e5-4380-a864-1be4be778b9e","Type":"ContainerDied","Data":"07e99c0cba4260fa9eeb9ba9ca022d98374126e364238b614f6aa6508e579253"} Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.288844 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07e99c0cba4260fa9eeb9ba9ca022d98374126e364238b614f6aa6508e579253" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.291973 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8fab-account-create-update-l6j8t" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.291954 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8fab-account-create-update-l6j8t" event={"ID":"327bd993-33a8-41c1-81e5-a3e8d92ff438","Type":"ContainerDied","Data":"9c36d7e9255d59afa3a09dc7a1a684fdc5083fec9e23d9780eea300532a5ffbf"} Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.292276 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c36d7e9255d59afa3a09dc7a1a684fdc5083fec9e23d9780eea300532a5ffbf" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.295069 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8s2fd" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.295394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8s2fd" event={"ID":"316abeb5-c5ff-4da2-b056-0f704b710dc7","Type":"ContainerDied","Data":"b3dc005cb1dc9fb324f0ee3e11c0f8fd82dc81546539dd7662ac458d9251e641"} Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.295433 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3dc005cb1dc9fb324f0ee3e11c0f8fd82dc81546539dd7662ac458d9251e641" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.753399 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.758238 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.831474 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9e08f8-216a-4f56-a8ac-e7f53147c636-operator-scripts\") pod \"be9e08f8-216a-4f56-a8ac-e7f53147c636\" (UID: \"be9e08f8-216a-4f56-a8ac-e7f53147c636\") " Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.831610 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpnhl\" (UniqueName: \"kubernetes.io/projected/55b42817-fba4-49ac-9215-3efbb04f0ef9-kube-api-access-xpnhl\") pod \"55b42817-fba4-49ac-9215-3efbb04f0ef9\" (UID: \"55b42817-fba4-49ac-9215-3efbb04f0ef9\") " Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.831641 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b42817-fba4-49ac-9215-3efbb04f0ef9-operator-scripts\") pod \"55b42817-fba4-49ac-9215-3efbb04f0ef9\" (UID: \"55b42817-fba4-49ac-9215-3efbb04f0ef9\") " Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.831710 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlfqj\" (UniqueName: \"kubernetes.io/projected/be9e08f8-216a-4f56-a8ac-e7f53147c636-kube-api-access-rlfqj\") pod \"be9e08f8-216a-4f56-a8ac-e7f53147c636\" (UID: \"be9e08f8-216a-4f56-a8ac-e7f53147c636\") " Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.832698 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b42817-fba4-49ac-9215-3efbb04f0ef9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55b42817-fba4-49ac-9215-3efbb04f0ef9" (UID: "55b42817-fba4-49ac-9215-3efbb04f0ef9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.832796 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b42817-fba4-49ac-9215-3efbb04f0ef9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.832907 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9e08f8-216a-4f56-a8ac-e7f53147c636-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be9e08f8-216a-4f56-a8ac-e7f53147c636" (UID: "be9e08f8-216a-4f56-a8ac-e7f53147c636"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.836818 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9e08f8-216a-4f56-a8ac-e7f53147c636-kube-api-access-rlfqj" (OuterVolumeSpecName: "kube-api-access-rlfqj") pod "be9e08f8-216a-4f56-a8ac-e7f53147c636" (UID: "be9e08f8-216a-4f56-a8ac-e7f53147c636"). InnerVolumeSpecName "kube-api-access-rlfqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.839516 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b42817-fba4-49ac-9215-3efbb04f0ef9-kube-api-access-xpnhl" (OuterVolumeSpecName: "kube-api-access-xpnhl") pod "55b42817-fba4-49ac-9215-3efbb04f0ef9" (UID: "55b42817-fba4-49ac-9215-3efbb04f0ef9"). InnerVolumeSpecName "kube-api-access-xpnhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.934891 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpnhl\" (UniqueName: \"kubernetes.io/projected/55b42817-fba4-49ac-9215-3efbb04f0ef9-kube-api-access-xpnhl\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.935052 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlfqj\" (UniqueName: \"kubernetes.io/projected/be9e08f8-216a-4f56-a8ac-e7f53147c636-kube-api-access-rlfqj\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:22 crc kubenswrapper[4775]: I1125 19:51:22.935084 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9e08f8-216a-4f56-a8ac-e7f53147c636-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:23 crc kubenswrapper[4775]: I1125 19:51:23.307113 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ft6j6" Nov 25 19:51:23 crc kubenswrapper[4775]: I1125 19:51:23.307147 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ft6j6" event={"ID":"be9e08f8-216a-4f56-a8ac-e7f53147c636","Type":"ContainerDied","Data":"a0cc6b0843d84e15b5da9f25ee896d77f9f28e5aab89fc3f3785f68ad3e26f54"} Nov 25 19:51:23 crc kubenswrapper[4775]: I1125 19:51:23.307223 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0cc6b0843d84e15b5da9f25ee896d77f9f28e5aab89fc3f3785f68ad3e26f54" Nov 25 19:51:23 crc kubenswrapper[4775]: I1125 19:51:23.309115 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-22a0-account-create-update-fxms6" event={"ID":"55b42817-fba4-49ac-9215-3efbb04f0ef9","Type":"ContainerDied","Data":"928e5f473bdefd0d8f43dabcb69f8b3ea505e36b0a12b9489f1ed12c56f91332"} Nov 25 19:51:23 crc kubenswrapper[4775]: I1125 19:51:23.309168 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="928e5f473bdefd0d8f43dabcb69f8b3ea505e36b0a12b9489f1ed12c56f91332" Nov 25 19:51:23 crc kubenswrapper[4775]: I1125 19:51:23.309402 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-22a0-account-create-update-fxms6" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.516898 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-n669j"] Nov 25 19:51:24 crc kubenswrapper[4775]: E1125 19:51:24.517477 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316abeb5-c5ff-4da2-b056-0f704b710dc7" containerName="mariadb-database-create" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517488 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="316abeb5-c5ff-4da2-b056-0f704b710dc7" containerName="mariadb-database-create" Nov 25 19:51:24 crc kubenswrapper[4775]: E1125 19:51:24.517505 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="327bd993-33a8-41c1-81e5-a3e8d92ff438" containerName="mariadb-account-create-update" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517511 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="327bd993-33a8-41c1-81e5-a3e8d92ff438" containerName="mariadb-account-create-update" Nov 25 19:51:24 crc kubenswrapper[4775]: E1125 19:51:24.517525 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9e08f8-216a-4f56-a8ac-e7f53147c636" containerName="mariadb-database-create" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517532 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9e08f8-216a-4f56-a8ac-e7f53147c636" containerName="mariadb-database-create" Nov 25 19:51:24 crc kubenswrapper[4775]: E1125 19:51:24.517546 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b42817-fba4-49ac-9215-3efbb04f0ef9" containerName="mariadb-account-create-update" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517552 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b42817-fba4-49ac-9215-3efbb04f0ef9" containerName="mariadb-account-create-update" Nov 25 19:51:24 crc kubenswrapper[4775]: E1125 19:51:24.517567 4775 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4a5c5668-58e5-4380-a864-1be4be778b9e" containerName="mariadb-account-create-update" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517572 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5c5668-58e5-4380-a864-1be4be778b9e" containerName="mariadb-account-create-update" Nov 25 19:51:24 crc kubenswrapper[4775]: E1125 19:51:24.517587 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c687c95-b35b-44f3-8db5-a32e1462e604" containerName="mariadb-database-create" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517592 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c687c95-b35b-44f3-8db5-a32e1462e604" containerName="mariadb-database-create" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517797 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="316abeb5-c5ff-4da2-b056-0f704b710dc7" containerName="mariadb-database-create" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517815 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b42817-fba4-49ac-9215-3efbb04f0ef9" containerName="mariadb-account-create-update" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517827 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c687c95-b35b-44f3-8db5-a32e1462e604" containerName="mariadb-database-create" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517836 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="327bd993-33a8-41c1-81e5-a3e8d92ff438" containerName="mariadb-account-create-update" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517843 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5c5668-58e5-4380-a864-1be4be778b9e" containerName="mariadb-account-create-update" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.517857 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9e08f8-216a-4f56-a8ac-e7f53147c636" 
containerName="mariadb-database-create" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.518345 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.522145 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dvz62" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.525147 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.544105 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n669j"] Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.570294 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-config-data\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.570371 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcdpm\" (UniqueName: \"kubernetes.io/projected/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-kube-api-access-tcdpm\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.570417 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-combined-ca-bundle\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.570440 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-db-sync-config-data\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.672354 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-config-data\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.672496 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcdpm\" (UniqueName: \"kubernetes.io/projected/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-kube-api-access-tcdpm\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.672560 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-combined-ca-bundle\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.672619 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-db-sync-config-data\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.680960 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-combined-ca-bundle\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.681305 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-db-sync-config-data\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.682532 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-config-data\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.702260 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcdpm\" (UniqueName: \"kubernetes.io/projected/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-kube-api-access-tcdpm\") pod \"glance-db-sync-n669j\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " pod="openstack/glance-db-sync-n669j" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.784635 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 25 19:51:24 crc kubenswrapper[4775]: I1125 19:51:24.841623 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-n669j" Nov 25 19:51:25 crc kubenswrapper[4775]: I1125 19:51:25.382043 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n669j"] Nov 25 19:51:25 crc kubenswrapper[4775]: W1125 19:51:25.388141 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbe5d55e_4118_479b_bd86_f3b8cdac21a8.slice/crio-cc70b3de4db7d0b284e7fe0649e1616c678a2913617ea52f953b028f25a916c4 WatchSource:0}: Error finding container cc70b3de4db7d0b284e7fe0649e1616c678a2913617ea52f953b028f25a916c4: Status 404 returned error can't find the container with id cc70b3de4db7d0b284e7fe0649e1616c678a2913617ea52f953b028f25a916c4 Nov 25 19:51:26 crc kubenswrapper[4775]: I1125 19:51:26.333132 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n669j" event={"ID":"cbe5d55e-4118-479b-bd86-f3b8cdac21a8","Type":"ContainerStarted","Data":"cc70b3de4db7d0b284e7fe0649e1616c678a2913617ea52f953b028f25a916c4"} Nov 25 19:51:29 crc kubenswrapper[4775]: I1125 19:51:29.076361 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-k9862" podUID="462d24f9-e5cf-42b4-905e-13fa5f5716fe" containerName="ovn-controller" probeResult="failure" output=< Nov 25 19:51:29 crc kubenswrapper[4775]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 19:51:29 crc kubenswrapper[4775]: > Nov 25 19:51:33 crc kubenswrapper[4775]: I1125 19:51:33.397003 4775 generic.go:334] "Generic (PLEG): container finished" podID="50995ab5-ef22-4466-9906-fab208c9a82d" containerID="eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0" exitCode=0 Nov 25 19:51:33 crc kubenswrapper[4775]: I1125 19:51:33.397170 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"50995ab5-ef22-4466-9906-fab208c9a82d","Type":"ContainerDied","Data":"eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0"} Nov 25 19:51:33 crc kubenswrapper[4775]: I1125 19:51:33.400204 4775 generic.go:334] "Generic (PLEG): container finished" podID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" containerID="70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec" exitCode=0 Nov 25 19:51:33 crc kubenswrapper[4775]: I1125 19:51:33.400233 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa","Type":"ContainerDied","Data":"70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec"} Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.057053 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-k9862" podUID="462d24f9-e5cf-42b4-905e-13fa5f5716fe" containerName="ovn-controller" probeResult="failure" output=< Nov 25 19:51:34 crc kubenswrapper[4775]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 19:51:34 crc kubenswrapper[4775]: > Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.068236 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.097001 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ckpwc" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.285013 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k9862-config-rbp4x"] Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.287275 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.290364 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.293984 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9862-config-rbp4x"] Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.447422 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-scripts\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.447480 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-additional-scripts\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.447515 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run-ovn\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.447532 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5s66\" (UniqueName: \"kubernetes.io/projected/536c4362-3c5f-4f46-97d7-bae733d91ee7-kube-api-access-g5s66\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: 
\"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.447556 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-log-ovn\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.447575 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.550638 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-scripts\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.550726 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-additional-scripts\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.550760 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run-ovn\") pod \"ovn-controller-k9862-config-rbp4x\" 
(UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.550774 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5s66\" (UniqueName: \"kubernetes.io/projected/536c4362-3c5f-4f46-97d7-bae733d91ee7-kube-api-access-g5s66\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.550800 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-log-ovn\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.550828 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.552007 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-log-ovn\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.552079 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run-ovn\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: 
\"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.552917 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.554116 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-additional-scripts\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.555822 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-scripts\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.576100 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5s66\" (UniqueName: \"kubernetes.io/projected/536c4362-3c5f-4f46-97d7-bae733d91ee7-kube-api-access-g5s66\") pod \"ovn-controller-k9862-config-rbp4x\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:34 crc kubenswrapper[4775]: I1125 19:51:34.626805 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.163334 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9862-config-rbp4x"] Nov 25 19:51:36 crc kubenswrapper[4775]: W1125 19:51:36.167536 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice/crio-918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390 WatchSource:0}: Error finding container 918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390: Status 404 returned error can't find the container with id 918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390 Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.437283 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50995ab5-ef22-4466-9906-fab208c9a82d","Type":"ContainerStarted","Data":"f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4"} Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.437885 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.439765 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa","Type":"ContainerStarted","Data":"8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45"} Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.439977 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.442557 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n669j" 
event={"ID":"cbe5d55e-4118-479b-bd86-f3b8cdac21a8","Type":"ContainerStarted","Data":"5a89ff352ff21b8ff2081247f805e55c92e0be9a124d510f8a53e6e910d4abb6"} Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.444227 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9862-config-rbp4x" event={"ID":"536c4362-3c5f-4f46-97d7-bae733d91ee7","Type":"ContainerStarted","Data":"918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390"} Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.478157 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.028787217 podStartE2EDuration="1m2.478134665s" podCreationTimestamp="2025-11-25 19:50:34 +0000 UTC" firstStartedPulling="2025-11-25 19:50:47.510705893 +0000 UTC m=+1029.427068259" lastFinishedPulling="2025-11-25 19:50:56.960053311 +0000 UTC m=+1038.876415707" observedRunningTime="2025-11-25 19:51:36.465006124 +0000 UTC m=+1078.381368490" watchObservedRunningTime="2025-11-25 19:51:36.478134665 +0000 UTC m=+1078.394497031" Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.506333 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.622028543 podStartE2EDuration="1m2.506302909s" podCreationTimestamp="2025-11-25 19:50:34 +0000 UTC" firstStartedPulling="2025-11-25 19:50:47.769910094 +0000 UTC m=+1029.686272480" lastFinishedPulling="2025-11-25 19:50:57.65418445 +0000 UTC m=+1039.570546846" observedRunningTime="2025-11-25 19:51:36.491130994 +0000 UTC m=+1078.407493350" watchObservedRunningTime="2025-11-25 19:51:36.506302909 +0000 UTC m=+1078.422665295" Nov 25 19:51:36 crc kubenswrapper[4775]: I1125 19:51:36.517745 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-n669j" podStartSLOduration=2.126595607 podStartE2EDuration="12.517722456s" podCreationTimestamp="2025-11-25 19:51:24 
+0000 UTC" firstStartedPulling="2025-11-25 19:51:25.390984687 +0000 UTC m=+1067.307347063" lastFinishedPulling="2025-11-25 19:51:35.782111536 +0000 UTC m=+1077.698473912" observedRunningTime="2025-11-25 19:51:36.514284563 +0000 UTC m=+1078.430646959" watchObservedRunningTime="2025-11-25 19:51:36.517722456 +0000 UTC m=+1078.434084852" Nov 25 19:51:37 crc kubenswrapper[4775]: I1125 19:51:37.453755 4775 generic.go:334] "Generic (PLEG): container finished" podID="536c4362-3c5f-4f46-97d7-bae733d91ee7" containerID="b407b7b65ec6b0c5c625b26dfc6afc952b2c1727f7f6a8cd4cec69e56e8cddb8" exitCode=0 Nov 25 19:51:37 crc kubenswrapper[4775]: I1125 19:51:37.455241 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9862-config-rbp4x" event={"ID":"536c4362-3c5f-4f46-97d7-bae733d91ee7","Type":"ContainerDied","Data":"b407b7b65ec6b0c5c625b26dfc6afc952b2c1727f7f6a8cd4cec69e56e8cddb8"} Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.753441 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.827357 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run-ovn\") pod \"536c4362-3c5f-4f46-97d7-bae733d91ee7\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.827418 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5s66\" (UniqueName: \"kubernetes.io/projected/536c4362-3c5f-4f46-97d7-bae733d91ee7-kube-api-access-g5s66\") pod \"536c4362-3c5f-4f46-97d7-bae733d91ee7\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.827442 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-additional-scripts\") pod \"536c4362-3c5f-4f46-97d7-bae733d91ee7\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.827480 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-log-ovn\") pod \"536c4362-3c5f-4f46-97d7-bae733d91ee7\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.827463 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "536c4362-3c5f-4f46-97d7-bae733d91ee7" (UID: "536c4362-3c5f-4f46-97d7-bae733d91ee7"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.827549 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run\") pod \"536c4362-3c5f-4f46-97d7-bae733d91ee7\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.827570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-scripts\") pod \"536c4362-3c5f-4f46-97d7-bae733d91ee7\" (UID: \"536c4362-3c5f-4f46-97d7-bae733d91ee7\") " Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.827681 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "536c4362-3c5f-4f46-97d7-bae733d91ee7" (UID: "536c4362-3c5f-4f46-97d7-bae733d91ee7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.827758 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run" (OuterVolumeSpecName: "var-run") pod "536c4362-3c5f-4f46-97d7-bae733d91ee7" (UID: "536c4362-3c5f-4f46-97d7-bae733d91ee7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.828124 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "536c4362-3c5f-4f46-97d7-bae733d91ee7" (UID: "536c4362-3c5f-4f46-97d7-bae733d91ee7"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.828502 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-scripts" (OuterVolumeSpecName: "scripts") pod "536c4362-3c5f-4f46-97d7-bae733d91ee7" (UID: "536c4362-3c5f-4f46-97d7-bae733d91ee7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.830586 4775 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.830618 4775 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.830635 4775 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.830775 4775 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/536c4362-3c5f-4f46-97d7-bae733d91ee7-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.830791 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/536c4362-3c5f-4f46-97d7-bae733d91ee7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.836416 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/536c4362-3c5f-4f46-97d7-bae733d91ee7-kube-api-access-g5s66" (OuterVolumeSpecName: "kube-api-access-g5s66") pod "536c4362-3c5f-4f46-97d7-bae733d91ee7" (UID: "536c4362-3c5f-4f46-97d7-bae733d91ee7"). InnerVolumeSpecName "kube-api-access-g5s66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:38 crc kubenswrapper[4775]: I1125 19:51:38.932471 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5s66\" (UniqueName: \"kubernetes.io/projected/536c4362-3c5f-4f46-97d7-bae733d91ee7-kube-api-access-g5s66\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:39 crc kubenswrapper[4775]: I1125 19:51:39.075198 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-k9862" Nov 25 19:51:39 crc kubenswrapper[4775]: I1125 19:51:39.476749 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9862-config-rbp4x" event={"ID":"536c4362-3c5f-4f46-97d7-bae733d91ee7","Type":"ContainerDied","Data":"918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390"} Nov 25 19:51:39 crc kubenswrapper[4775]: I1125 19:51:39.477042 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390" Nov 25 19:51:39 crc kubenswrapper[4775]: I1125 19:51:39.476826 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9862-config-rbp4x" Nov 25 19:51:39 crc kubenswrapper[4775]: I1125 19:51:39.877700 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-k9862-config-rbp4x"] Nov 25 19:51:39 crc kubenswrapper[4775]: I1125 19:51:39.904027 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-k9862-config-rbp4x"] Nov 25 19:51:40 crc kubenswrapper[4775]: I1125 19:51:40.860189 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536c4362-3c5f-4f46-97d7-bae733d91ee7" path="/var/lib/kubelet/pods/536c4362-3c5f-4f46-97d7-bae733d91ee7/volumes" Nov 25 19:51:42 crc kubenswrapper[4775]: I1125 19:51:42.506358 4775 generic.go:334] "Generic (PLEG): container finished" podID="cbe5d55e-4118-479b-bd86-f3b8cdac21a8" containerID="5a89ff352ff21b8ff2081247f805e55c92e0be9a124d510f8a53e6e910d4abb6" exitCode=0 Nov 25 19:51:42 crc kubenswrapper[4775]: I1125 19:51:42.506495 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n669j" event={"ID":"cbe5d55e-4118-479b-bd86-f3b8cdac21a8","Type":"ContainerDied","Data":"5a89ff352ff21b8ff2081247f805e55c92e0be9a124d510f8a53e6e910d4abb6"} Nov 25 19:51:43 crc kubenswrapper[4775]: I1125 19:51:43.941209 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-n669j" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.016496 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-db-sync-config-data\") pod \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.016705 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-config-data\") pod \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.016785 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcdpm\" (UniqueName: \"kubernetes.io/projected/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-kube-api-access-tcdpm\") pod \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.016850 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-combined-ca-bundle\") pod \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\" (UID: \"cbe5d55e-4118-479b-bd86-f3b8cdac21a8\") " Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.022909 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cbe5d55e-4118-479b-bd86-f3b8cdac21a8" (UID: "cbe5d55e-4118-479b-bd86-f3b8cdac21a8"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.023287 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-kube-api-access-tcdpm" (OuterVolumeSpecName: "kube-api-access-tcdpm") pod "cbe5d55e-4118-479b-bd86-f3b8cdac21a8" (UID: "cbe5d55e-4118-479b-bd86-f3b8cdac21a8"). InnerVolumeSpecName "kube-api-access-tcdpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.048250 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbe5d55e-4118-479b-bd86-f3b8cdac21a8" (UID: "cbe5d55e-4118-479b-bd86-f3b8cdac21a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.064092 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-config-data" (OuterVolumeSpecName: "config-data") pod "cbe5d55e-4118-479b-bd86-f3b8cdac21a8" (UID: "cbe5d55e-4118-479b-bd86-f3b8cdac21a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.119228 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.119499 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcdpm\" (UniqueName: \"kubernetes.io/projected/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-kube-api-access-tcdpm\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.119515 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.119526 4775 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cbe5d55e-4118-479b-bd86-f3b8cdac21a8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.528388 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n669j" event={"ID":"cbe5d55e-4118-479b-bd86-f3b8cdac21a8","Type":"ContainerDied","Data":"cc70b3de4db7d0b284e7fe0649e1616c678a2913617ea52f953b028f25a916c4"} Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.528442 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-n669j" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.528450 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc70b3de4db7d0b284e7fe0649e1616c678a2913617ea52f953b028f25a916c4" Nov 25 19:51:44 crc kubenswrapper[4775]: E1125 19:51:44.813496 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice/crio-918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice\": RecentStats: unable to find data in memory cache]" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.920078 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-qn6h5"] Nov 25 19:51:44 crc kubenswrapper[4775]: E1125 19:51:44.920370 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe5d55e-4118-479b-bd86-f3b8cdac21a8" containerName="glance-db-sync" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.920386 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe5d55e-4118-479b-bd86-f3b8cdac21a8" containerName="glance-db-sync" Nov 25 19:51:44 crc kubenswrapper[4775]: E1125 19:51:44.920403 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536c4362-3c5f-4f46-97d7-bae733d91ee7" containerName="ovn-config" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.920409 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="536c4362-3c5f-4f46-97d7-bae733d91ee7" containerName="ovn-config" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.920563 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe5d55e-4118-479b-bd86-f3b8cdac21a8" containerName="glance-db-sync" 
Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.920579 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="536c4362-3c5f-4f46-97d7-bae733d91ee7" containerName="ovn-config" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.921341 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:44 crc kubenswrapper[4775]: I1125 19:51:44.938183 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-qn6h5"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.038587 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-config\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.038669 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.038741 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.038992 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.039062 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmqdp\" (UniqueName: \"kubernetes.io/projected/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-kube-api-access-bmqdp\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.140863 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.140943 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.140972 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmqdp\" (UniqueName: \"kubernetes.io/projected/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-kube-api-access-bmqdp\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.141000 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-config\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.141035 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.141902 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.141917 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-config\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.142010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.142085 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: 
\"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.164839 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmqdp\" (UniqueName: \"kubernetes.io/projected/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-kube-api-access-bmqdp\") pod \"dnsmasq-dns-54f9b7b8d9-qn6h5\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.237950 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.466836 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 19:51:45 crc kubenswrapper[4775]: W1125 19:51:45.711783 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d9d8128_4199_4c1a_8e3d_ee81f10ef97a.slice/crio-e160b71d52a928f3f42d07aabb396f767aea2c05a39c13359cce97513a33a9c7 WatchSource:0}: Error finding container e160b71d52a928f3f42d07aabb396f767aea2c05a39c13359cce97513a33a9c7: Status 404 returned error can't find the container with id e160b71d52a928f3f42d07aabb396f767aea2c05a39c13359cce97513a33a9c7 Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.708981 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-qn6h5"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.742344 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-q6hvd"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.743228 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.765191 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-q6hvd"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.814087 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.850497 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkwdw\" (UniqueName: \"kubernetes.io/projected/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-kube-api-access-kkwdw\") pod \"cinder-db-create-q6hvd\" (UID: \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\") " pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.850581 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-operator-scripts\") pod \"cinder-db-create-q6hvd\" (UID: \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\") " pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.857809 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-c8twt"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.858845 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.894372 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-841b-account-create-update-mrmc7"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.895301 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.897285 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.902861 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c8twt"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.907296 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-841b-account-create-update-mrmc7"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.951695 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31220f5-79b6-47df-8501-3ce22c2fc213-operator-scripts\") pod \"barbican-841b-account-create-update-mrmc7\" (UID: \"d31220f5-79b6-47df-8501-3ce22c2fc213\") " pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.951998 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkwdw\" (UniqueName: \"kubernetes.io/projected/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-kube-api-access-kkwdw\") pod \"cinder-db-create-q6hvd\" (UID: \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\") " pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.952030 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7x4w\" (UniqueName: \"kubernetes.io/projected/d31220f5-79b6-47df-8501-3ce22c2fc213-kube-api-access-f7x4w\") pod \"barbican-841b-account-create-update-mrmc7\" (UID: \"d31220f5-79b6-47df-8501-3ce22c2fc213\") " pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.952083 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-operator-scripts\") pod \"cinder-db-create-q6hvd\" (UID: \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\") " pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.952127 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-operator-scripts\") pod \"barbican-db-create-c8twt\" (UID: \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\") " pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.952262 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt4k6\" (UniqueName: \"kubernetes.io/projected/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-kube-api-access-wt4k6\") pod \"barbican-db-create-c8twt\" (UID: \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\") " pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.953973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-operator-scripts\") pod \"cinder-db-create-q6hvd\" (UID: \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\") " pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.961522 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ff43-account-create-update-8cktf"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.963043 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.971990 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.978557 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ff43-account-create-update-8cktf"] Nov 25 19:51:45 crc kubenswrapper[4775]: I1125 19:51:45.984948 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkwdw\" (UniqueName: \"kubernetes.io/projected/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-kube-api-access-kkwdw\") pod \"cinder-db-create-q6hvd\" (UID: \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\") " pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.053944 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt4k6\" (UniqueName: \"kubernetes.io/projected/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-kube-api-access-wt4k6\") pod \"barbican-db-create-c8twt\" (UID: \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\") " pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.054023 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31220f5-79b6-47df-8501-3ce22c2fc213-operator-scripts\") pod \"barbican-841b-account-create-update-mrmc7\" (UID: \"d31220f5-79b6-47df-8501-3ce22c2fc213\") " pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.054050 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8jxj\" (UniqueName: \"kubernetes.io/projected/34676258-9f7c-4e01-9e25-eacccc2f9a7f-kube-api-access-x8jxj\") pod \"cinder-ff43-account-create-update-8cktf\" (UID: \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\") 
" pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.054074 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7x4w\" (UniqueName: \"kubernetes.io/projected/d31220f5-79b6-47df-8501-3ce22c2fc213-kube-api-access-f7x4w\") pod \"barbican-841b-account-create-update-mrmc7\" (UID: \"d31220f5-79b6-47df-8501-3ce22c2fc213\") " pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.054120 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-operator-scripts\") pod \"barbican-db-create-c8twt\" (UID: \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\") " pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.054167 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34676258-9f7c-4e01-9e25-eacccc2f9a7f-operator-scripts\") pod \"cinder-ff43-account-create-update-8cktf\" (UID: \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\") " pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.054860 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31220f5-79b6-47df-8501-3ce22c2fc213-operator-scripts\") pod \"barbican-841b-account-create-update-mrmc7\" (UID: \"d31220f5-79b6-47df-8501-3ce22c2fc213\") " pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.055073 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-operator-scripts\") pod 
\"barbican-db-create-c8twt\" (UID: \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\") " pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.067534 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt4k6\" (UniqueName: \"kubernetes.io/projected/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-kube-api-access-wt4k6\") pod \"barbican-db-create-c8twt\" (UID: \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\") " pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.074013 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7x4w\" (UniqueName: \"kubernetes.io/projected/d31220f5-79b6-47df-8501-3ce22c2fc213-kube-api-access-f7x4w\") pod \"barbican-841b-account-create-update-mrmc7\" (UID: \"d31220f5-79b6-47df-8501-3ce22c2fc213\") " pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.100617 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-m2srp"] Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.101582 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.104171 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.104287 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.105305 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.105336 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xlssr" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.111047 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.124628 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-m2srp"] Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.155523 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-config-data\") pod \"keystone-db-sync-m2srp\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.155595 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8jxj\" (UniqueName: \"kubernetes.io/projected/34676258-9f7c-4e01-9e25-eacccc2f9a7f-kube-api-access-x8jxj\") pod \"cinder-ff43-account-create-update-8cktf\" (UID: \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\") " pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.155626 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jrxr\" (UniqueName: \"kubernetes.io/projected/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-kube-api-access-8jrxr\") pod \"keystone-db-sync-m2srp\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.155689 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34676258-9f7c-4e01-9e25-eacccc2f9a7f-operator-scripts\") pod \"cinder-ff43-account-create-update-8cktf\" (UID: \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\") " pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.155724 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-combined-ca-bundle\") pod \"keystone-db-sync-m2srp\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.156912 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34676258-9f7c-4e01-9e25-eacccc2f9a7f-operator-scripts\") pod \"cinder-ff43-account-create-update-8cktf\" (UID: \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\") " pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.172097 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8jxj\" (UniqueName: \"kubernetes.io/projected/34676258-9f7c-4e01-9e25-eacccc2f9a7f-kube-api-access-x8jxj\") pod \"cinder-ff43-account-create-update-8cktf\" (UID: \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\") " pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:46 crc kubenswrapper[4775]: 
I1125 19:51:46.196527 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.217239 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.248930 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ht9vf"] Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.250124 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.258127 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-combined-ca-bundle\") pod \"keystone-db-sync-m2srp\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.258179 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-config-data\") pod \"keystone-db-sync-m2srp\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.258209 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chk45\" (UniqueName: \"kubernetes.io/projected/86c93add-5616-45ac-b00b-5269f071ce55-kube-api-access-chk45\") pod \"neutron-db-create-ht9vf\" (UID: \"86c93add-5616-45ac-b00b-5269f071ce55\") " pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.258262 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8jrxr\" (UniqueName: \"kubernetes.io/projected/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-kube-api-access-8jrxr\") pod \"keystone-db-sync-m2srp\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.258281 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c93add-5616-45ac-b00b-5269f071ce55-operator-scripts\") pod \"neutron-db-create-ht9vf\" (UID: \"86c93add-5616-45ac-b00b-5269f071ce55\") " pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.262254 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-849c-account-create-update-bxwdn"] Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.269082 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-combined-ca-bundle\") pod \"keystone-db-sync-m2srp\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.276624 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.282955 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-config-data\") pod \"keystone-db-sync-m2srp\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.283521 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.299409 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ht9vf"] Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.302156 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jrxr\" (UniqueName: \"kubernetes.io/projected/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-kube-api-access-8jrxr\") pod \"keystone-db-sync-m2srp\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.333573 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-849c-account-create-update-bxwdn"] Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.333952 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.360210 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c93add-5616-45ac-b00b-5269f071ce55-operator-scripts\") pod \"neutron-db-create-ht9vf\" (UID: \"86c93add-5616-45ac-b00b-5269f071ce55\") " pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.360285 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdlg\" (UniqueName: \"kubernetes.io/projected/e80ccc3b-e580-4303-a3d1-44c548db2e2e-kube-api-access-lxdlg\") pod \"neutron-849c-account-create-update-bxwdn\" (UID: \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\") " pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.360316 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e80ccc3b-e580-4303-a3d1-44c548db2e2e-operator-scripts\") pod \"neutron-849c-account-create-update-bxwdn\" (UID: \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\") " pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.360343 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chk45\" (UniqueName: \"kubernetes.io/projected/86c93add-5616-45ac-b00b-5269f071ce55-kube-api-access-chk45\") pod \"neutron-db-create-ht9vf\" (UID: \"86c93add-5616-45ac-b00b-5269f071ce55\") " pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.370978 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c93add-5616-45ac-b00b-5269f071ce55-operator-scripts\") pod 
\"neutron-db-create-ht9vf\" (UID: \"86c93add-5616-45ac-b00b-5269f071ce55\") " pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.391222 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chk45\" (UniqueName: \"kubernetes.io/projected/86c93add-5616-45ac-b00b-5269f071ce55-kube-api-access-chk45\") pod \"neutron-db-create-ht9vf\" (UID: \"86c93add-5616-45ac-b00b-5269f071ce55\") " pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.439260 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.461883 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxdlg\" (UniqueName: \"kubernetes.io/projected/e80ccc3b-e580-4303-a3d1-44c548db2e2e-kube-api-access-lxdlg\") pod \"neutron-849c-account-create-update-bxwdn\" (UID: \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\") " pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.462169 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e80ccc3b-e580-4303-a3d1-44c548db2e2e-operator-scripts\") pod \"neutron-849c-account-create-update-bxwdn\" (UID: \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\") " pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.462894 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e80ccc3b-e580-4303-a3d1-44c548db2e2e-operator-scripts\") pod \"neutron-849c-account-create-update-bxwdn\" (UID: \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\") " pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.497797 
4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxdlg\" (UniqueName: \"kubernetes.io/projected/e80ccc3b-e580-4303-a3d1-44c548db2e2e-kube-api-access-lxdlg\") pod \"neutron-849c-account-create-update-bxwdn\" (UID: \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\") " pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.551459 4775 generic.go:334] "Generic (PLEG): container finished" podID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerID="b32428d88b96b909ac56dbf6a6dce670e26ac277ea01561dfb377f4aa1f55a2f" exitCode=0 Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.551497 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" event={"ID":"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a","Type":"ContainerDied","Data":"b32428d88b96b909ac56dbf6a6dce670e26ac277ea01561dfb377f4aa1f55a2f"} Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.551521 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" event={"ID":"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a","Type":"ContainerStarted","Data":"e160b71d52a928f3f42d07aabb396f767aea2c05a39c13359cce97513a33a9c7"} Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.631209 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-q6hvd"] Nov 25 19:51:46 crc kubenswrapper[4775]: W1125 19:51:46.650700 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8594b0a4_733b_4fb6_ad7c_1dc2c58a3908.slice/crio-94017ea166960a3ebf808bf4531c4e8d866dd0bea2cecac6a4adb89d96993665 WatchSource:0}: Error finding container 94017ea166960a3ebf808bf4531c4e8d866dd0bea2cecac6a4adb89d96993665: Status 404 returned error can't find the container with id 94017ea166960a3ebf808bf4531c4e8d866dd0bea2cecac6a4adb89d96993665 Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 
19:51:46.676516 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.702810 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.796220 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-841b-account-create-update-mrmc7"] Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.813675 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c8twt"] Nov 25 19:51:46 crc kubenswrapper[4775]: W1125 19:51:46.832365 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd31220f5_79b6_47df_8501_3ce22c2fc213.slice/crio-31d69bad19e564766d1dc1661cf84265e5165cdb9abce2761d68d7fb4dbaa21f WatchSource:0}: Error finding container 31d69bad19e564766d1dc1661cf84265e5165cdb9abce2761d68d7fb4dbaa21f: Status 404 returned error can't find the container with id 31d69bad19e564766d1dc1661cf84265e5165cdb9abce2761d68d7fb4dbaa21f Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.946548 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ff43-account-create-update-8cktf"] Nov 25 19:51:46 crc kubenswrapper[4775]: W1125 19:51:46.968863 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34676258_9f7c_4e01_9e25_eacccc2f9a7f.slice/crio-170d7f020cdb27aa81ae73b835ac2a2822a281d2e82da202b7b080d28b905265 WatchSource:0}: Error finding container 170d7f020cdb27aa81ae73b835ac2a2822a281d2e82da202b7b080d28b905265: Status 404 returned error can't find the container with id 170d7f020cdb27aa81ae73b835ac2a2822a281d2e82da202b7b080d28b905265 Nov 25 19:51:46 crc kubenswrapper[4775]: I1125 19:51:46.972582 4775 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ht9vf"] Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.013678 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-m2srp"] Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.309225 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-849c-account-create-update-bxwdn"] Nov 25 19:51:47 crc kubenswrapper[4775]: W1125 19:51:47.329848 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80ccc3b_e580_4303_a3d1_44c548db2e2e.slice/crio-7591ec1f4f6fe08bff72938ab6e8f12712dad0dd4b734885430f6ac1f2ec5e40 WatchSource:0}: Error finding container 7591ec1f4f6fe08bff72938ab6e8f12712dad0dd4b734885430f6ac1f2ec5e40: Status 404 returned error can't find the container with id 7591ec1f4f6fe08bff72938ab6e8f12712dad0dd4b734885430f6ac1f2ec5e40 Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.563613 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ht9vf" event={"ID":"86c93add-5616-45ac-b00b-5269f071ce55","Type":"ContainerStarted","Data":"53b57cdcbfadf01205a51fefc908bb3dfe709d7a0f68388e1fde383dd23a8d21"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.563886 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ht9vf" event={"ID":"86c93add-5616-45ac-b00b-5269f071ce55","Type":"ContainerStarted","Data":"884e49d537618f721e9ac7c46b477d68309a6bed3bb2ccfb47b1f8075b13f4e0"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.565990 4775 generic.go:334] "Generic (PLEG): container finished" podID="8594b0a4-733b-4fb6-ad7c-1dc2c58a3908" containerID="ea6769a520ec6a84bd4ba8c5438d944ca0ee470bd235c1f5944f35378987f528" exitCode=0 Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.566097 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-q6hvd" 
event={"ID":"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908","Type":"ContainerDied","Data":"ea6769a520ec6a84bd4ba8c5438d944ca0ee470bd235c1f5944f35378987f528"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.566118 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-q6hvd" event={"ID":"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908","Type":"ContainerStarted","Data":"94017ea166960a3ebf808bf4531c4e8d866dd0bea2cecac6a4adb89d96993665"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.570833 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c8twt" event={"ID":"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4","Type":"ContainerStarted","Data":"a49f3d3429050ce25af960286f05cce53c8850d86172437d97f22ae09683f77a"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.570869 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c8twt" event={"ID":"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4","Type":"ContainerStarted","Data":"ca55cf0509b6ee7e1a7b48ca57c3f54a849155e3e545c1818f9c03711e952988"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.577893 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849c-account-create-update-bxwdn" event={"ID":"e80ccc3b-e580-4303-a3d1-44c548db2e2e","Type":"ContainerStarted","Data":"7591ec1f4f6fe08bff72938ab6e8f12712dad0dd4b734885430f6ac1f2ec5e40"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.580274 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-ht9vf" podStartSLOduration=1.580262733 podStartE2EDuration="1.580262733s" podCreationTimestamp="2025-11-25 19:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:51:47.579197155 +0000 UTC m=+1089.495559521" watchObservedRunningTime="2025-11-25 19:51:47.580262733 +0000 UTC m=+1089.496625099" Nov 25 19:51:47 crc 
kubenswrapper[4775]: I1125 19:51:47.589222 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" event={"ID":"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a","Type":"ContainerStarted","Data":"e67fb946cde1bec2112257c7ae7185cb66aff549f1dbea10e3350ab8c7e10500"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.589919 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.594406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-841b-account-create-update-mrmc7" event={"ID":"d31220f5-79b6-47df-8501-3ce22c2fc213","Type":"ContainerStarted","Data":"71df4547b2f6de0e06a65d6289cf1d715015bbfb6a60aa2843987bd6833b7fcd"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.594457 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-841b-account-create-update-mrmc7" event={"ID":"d31220f5-79b6-47df-8501-3ce22c2fc213","Type":"ContainerStarted","Data":"31d69bad19e564766d1dc1661cf84265e5165cdb9abce2761d68d7fb4dbaa21f"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.604377 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ff43-account-create-update-8cktf" event={"ID":"34676258-9f7c-4e01-9e25-eacccc2f9a7f","Type":"ContainerStarted","Data":"4db6f50bf740bae07b01352ba6f443bc798bcb497062213ed6afc183fb31b4c6"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.604437 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ff43-account-create-update-8cktf" event={"ID":"34676258-9f7c-4e01-9e25-eacccc2f9a7f","Type":"ContainerStarted","Data":"170d7f020cdb27aa81ae73b835ac2a2822a281d2e82da202b7b080d28b905265"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.606967 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m2srp" 
event={"ID":"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2","Type":"ContainerStarted","Data":"f22c7c629405e67d4b32b9208ad84f8ec8e12f706b2a41a64f4ba4e75ca3a0f5"} Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.615214 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-c8twt" podStartSLOduration=2.615198078 podStartE2EDuration="2.615198078s" podCreationTimestamp="2025-11-25 19:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:51:47.614972962 +0000 UTC m=+1089.531335328" watchObservedRunningTime="2025-11-25 19:51:47.615198078 +0000 UTC m=+1089.531560444" Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.633477 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ff43-account-create-update-8cktf" podStartSLOduration=2.633458928 podStartE2EDuration="2.633458928s" podCreationTimestamp="2025-11-25 19:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:51:47.626635295 +0000 UTC m=+1089.542997661" watchObservedRunningTime="2025-11-25 19:51:47.633458928 +0000 UTC m=+1089.549821294" Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.653054 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-841b-account-create-update-mrmc7" podStartSLOduration=2.653028761 podStartE2EDuration="2.653028761s" podCreationTimestamp="2025-11-25 19:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:51:47.64438652 +0000 UTC m=+1089.560748896" watchObservedRunningTime="2025-11-25 19:51:47.653028761 +0000 UTC m=+1089.569391127" Nov 25 19:51:47 crc kubenswrapper[4775]: I1125 19:51:47.667397 4775 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" podStartSLOduration=3.667381326 podStartE2EDuration="3.667381326s" podCreationTimestamp="2025-11-25 19:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:51:47.659930356 +0000 UTC m=+1089.576292722" watchObservedRunningTime="2025-11-25 19:51:47.667381326 +0000 UTC m=+1089.583743692" Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.617410 4775 generic.go:334] "Generic (PLEG): container finished" podID="2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4" containerID="a49f3d3429050ce25af960286f05cce53c8850d86172437d97f22ae09683f77a" exitCode=0 Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.617493 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c8twt" event={"ID":"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4","Type":"ContainerDied","Data":"a49f3d3429050ce25af960286f05cce53c8850d86172437d97f22ae09683f77a"} Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.619786 4775 generic.go:334] "Generic (PLEG): container finished" podID="e80ccc3b-e580-4303-a3d1-44c548db2e2e" containerID="97d3fdc25f88e86a23ffa46a33a486e2278406293c1a7d902fc6c7c50d8725dc" exitCode=0 Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.619878 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849c-account-create-update-bxwdn" event={"ID":"e80ccc3b-e580-4303-a3d1-44c548db2e2e","Type":"ContainerDied","Data":"97d3fdc25f88e86a23ffa46a33a486e2278406293c1a7d902fc6c7c50d8725dc"} Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.621413 4775 generic.go:334] "Generic (PLEG): container finished" podID="86c93add-5616-45ac-b00b-5269f071ce55" containerID="53b57cdcbfadf01205a51fefc908bb3dfe709d7a0f68388e1fde383dd23a8d21" exitCode=0 Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.621454 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-ht9vf" event={"ID":"86c93add-5616-45ac-b00b-5269f071ce55","Type":"ContainerDied","Data":"53b57cdcbfadf01205a51fefc908bb3dfe709d7a0f68388e1fde383dd23a8d21"} Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.623236 4775 generic.go:334] "Generic (PLEG): container finished" podID="d31220f5-79b6-47df-8501-3ce22c2fc213" containerID="71df4547b2f6de0e06a65d6289cf1d715015bbfb6a60aa2843987bd6833b7fcd" exitCode=0 Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.623304 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-841b-account-create-update-mrmc7" event={"ID":"d31220f5-79b6-47df-8501-3ce22c2fc213","Type":"ContainerDied","Data":"71df4547b2f6de0e06a65d6289cf1d715015bbfb6a60aa2843987bd6833b7fcd"} Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.626153 4775 generic.go:334] "Generic (PLEG): container finished" podID="34676258-9f7c-4e01-9e25-eacccc2f9a7f" containerID="4db6f50bf740bae07b01352ba6f443bc798bcb497062213ed6afc183fb31b4c6" exitCode=0 Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.626176 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ff43-account-create-update-8cktf" event={"ID":"34676258-9f7c-4e01-9e25-eacccc2f9a7f","Type":"ContainerDied","Data":"4db6f50bf740bae07b01352ba6f443bc798bcb497062213ed6afc183fb31b4c6"} Nov 25 19:51:48 crc kubenswrapper[4775]: I1125 19:51:48.963544 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:49 crc kubenswrapper[4775]: I1125 19:51:49.015336 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-operator-scripts\") pod \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\" (UID: \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\") " Nov 25 19:51:49 crc kubenswrapper[4775]: I1125 19:51:49.015516 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkwdw\" (UniqueName: \"kubernetes.io/projected/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-kube-api-access-kkwdw\") pod \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\" (UID: \"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908\") " Nov 25 19:51:49 crc kubenswrapper[4775]: I1125 19:51:49.016021 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8594b0a4-733b-4fb6-ad7c-1dc2c58a3908" (UID: "8594b0a4-733b-4fb6-ad7c-1dc2c58a3908"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:49 crc kubenswrapper[4775]: I1125 19:51:49.016198 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:49 crc kubenswrapper[4775]: I1125 19:51:49.023720 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-kube-api-access-kkwdw" (OuterVolumeSpecName: "kube-api-access-kkwdw") pod "8594b0a4-733b-4fb6-ad7c-1dc2c58a3908" (UID: "8594b0a4-733b-4fb6-ad7c-1dc2c58a3908"). InnerVolumeSpecName "kube-api-access-kkwdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:49 crc kubenswrapper[4775]: I1125 19:51:49.118172 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkwdw\" (UniqueName: \"kubernetes.io/projected/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908-kube-api-access-kkwdw\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:49 crc kubenswrapper[4775]: I1125 19:51:49.636469 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-q6hvd" event={"ID":"8594b0a4-733b-4fb6-ad7c-1dc2c58a3908","Type":"ContainerDied","Data":"94017ea166960a3ebf808bf4531c4e8d866dd0bea2cecac6a4adb89d96993665"} Nov 25 19:51:49 crc kubenswrapper[4775]: I1125 19:51:49.636848 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94017ea166960a3ebf808bf4531c4e8d866dd0bea2cecac6a4adb89d96993665" Nov 25 19:51:49 crc kubenswrapper[4775]: I1125 19:51:49.636787 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-q6hvd" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.098383 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.209668 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.218707 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.228920 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.239319 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.245750 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chk45\" (UniqueName: \"kubernetes.io/projected/86c93add-5616-45ac-b00b-5269f071ce55-kube-api-access-chk45\") pod \"86c93add-5616-45ac-b00b-5269f071ce55\" (UID: \"86c93add-5616-45ac-b00b-5269f071ce55\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.245972 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c93add-5616-45ac-b00b-5269f071ce55-operator-scripts\") pod \"86c93add-5616-45ac-b00b-5269f071ce55\" (UID: \"86c93add-5616-45ac-b00b-5269f071ce55\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.247456 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c93add-5616-45ac-b00b-5269f071ce55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86c93add-5616-45ac-b00b-5269f071ce55" (UID: "86c93add-5616-45ac-b00b-5269f071ce55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.254003 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c93add-5616-45ac-b00b-5269f071ce55-kube-api-access-chk45" (OuterVolumeSpecName: "kube-api-access-chk45") pod "86c93add-5616-45ac-b00b-5269f071ce55" (UID: "86c93add-5616-45ac-b00b-5269f071ce55"). InnerVolumeSpecName "kube-api-access-chk45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347072 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e80ccc3b-e580-4303-a3d1-44c548db2e2e-operator-scripts\") pod \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\" (UID: \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347147 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31220f5-79b6-47df-8501-3ce22c2fc213-operator-scripts\") pod \"d31220f5-79b6-47df-8501-3ce22c2fc213\" (UID: \"d31220f5-79b6-47df-8501-3ce22c2fc213\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347184 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34676258-9f7c-4e01-9e25-eacccc2f9a7f-operator-scripts\") pod \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\" (UID: \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347234 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-operator-scripts\") pod \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\" (UID: \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347288 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt4k6\" (UniqueName: \"kubernetes.io/projected/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-kube-api-access-wt4k6\") pod \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\" (UID: \"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347529 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e80ccc3b-e580-4303-a3d1-44c548db2e2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e80ccc3b-e580-4303-a3d1-44c548db2e2e" (UID: "e80ccc3b-e580-4303-a3d1-44c548db2e2e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347903 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7x4w\" (UniqueName: \"kubernetes.io/projected/d31220f5-79b6-47df-8501-3ce22c2fc213-kube-api-access-f7x4w\") pod \"d31220f5-79b6-47df-8501-3ce22c2fc213\" (UID: \"d31220f5-79b6-47df-8501-3ce22c2fc213\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347972 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31220f5-79b6-47df-8501-3ce22c2fc213-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d31220f5-79b6-47df-8501-3ce22c2fc213" (UID: "d31220f5-79b6-47df-8501-3ce22c2fc213"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347849 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34676258-9f7c-4e01-9e25-eacccc2f9a7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34676258-9f7c-4e01-9e25-eacccc2f9a7f" (UID: "34676258-9f7c-4e01-9e25-eacccc2f9a7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.347866 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4" (UID: "2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.348099 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8jxj\" (UniqueName: \"kubernetes.io/projected/34676258-9f7c-4e01-9e25-eacccc2f9a7f-kube-api-access-x8jxj\") pod \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\" (UID: \"34676258-9f7c-4e01-9e25-eacccc2f9a7f\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.348188 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxdlg\" (UniqueName: \"kubernetes.io/projected/e80ccc3b-e580-4303-a3d1-44c548db2e2e-kube-api-access-lxdlg\") pod \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\" (UID: \"e80ccc3b-e580-4303-a3d1-44c548db2e2e\") " Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.348925 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31220f5-79b6-47df-8501-3ce22c2fc213-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.348956 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34676258-9f7c-4e01-9e25-eacccc2f9a7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.348974 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c93add-5616-45ac-b00b-5269f071ce55-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.348988 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.349008 4775 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-chk45\" (UniqueName: \"kubernetes.io/projected/86c93add-5616-45ac-b00b-5269f071ce55-kube-api-access-chk45\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.350235 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-kube-api-access-wt4k6" (OuterVolumeSpecName: "kube-api-access-wt4k6") pod "2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4" (UID: "2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4"). InnerVolumeSpecName "kube-api-access-wt4k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.353442 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34676258-9f7c-4e01-9e25-eacccc2f9a7f-kube-api-access-x8jxj" (OuterVolumeSpecName: "kube-api-access-x8jxj") pod "34676258-9f7c-4e01-9e25-eacccc2f9a7f" (UID: "34676258-9f7c-4e01-9e25-eacccc2f9a7f"). InnerVolumeSpecName "kube-api-access-x8jxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.353984 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31220f5-79b6-47df-8501-3ce22c2fc213-kube-api-access-f7x4w" (OuterVolumeSpecName: "kube-api-access-f7x4w") pod "d31220f5-79b6-47df-8501-3ce22c2fc213" (UID: "d31220f5-79b6-47df-8501-3ce22c2fc213"). InnerVolumeSpecName "kube-api-access-f7x4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.355511 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80ccc3b-e580-4303-a3d1-44c548db2e2e-kube-api-access-lxdlg" (OuterVolumeSpecName: "kube-api-access-lxdlg") pod "e80ccc3b-e580-4303-a3d1-44c548db2e2e" (UID: "e80ccc3b-e580-4303-a3d1-44c548db2e2e"). InnerVolumeSpecName "kube-api-access-lxdlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.450965 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e80ccc3b-e580-4303-a3d1-44c548db2e2e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.451025 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt4k6\" (UniqueName: \"kubernetes.io/projected/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4-kube-api-access-wt4k6\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.451056 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7x4w\" (UniqueName: \"kubernetes.io/projected/d31220f5-79b6-47df-8501-3ce22c2fc213-kube-api-access-f7x4w\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.451076 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8jxj\" (UniqueName: \"kubernetes.io/projected/34676258-9f7c-4e01-9e25-eacccc2f9a7f-kube-api-access-x8jxj\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.451095 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxdlg\" (UniqueName: \"kubernetes.io/projected/e80ccc3b-e580-4303-a3d1-44c548db2e2e-kube-api-access-lxdlg\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.655714 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ff43-account-create-update-8cktf" event={"ID":"34676258-9f7c-4e01-9e25-eacccc2f9a7f","Type":"ContainerDied","Data":"170d7f020cdb27aa81ae73b835ac2a2822a281d2e82da202b7b080d28b905265"} Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.656024 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="170d7f020cdb27aa81ae73b835ac2a2822a281d2e82da202b7b080d28b905265" 
Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.656391 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ff43-account-create-update-8cktf" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.658483 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c8twt" event={"ID":"2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4","Type":"ContainerDied","Data":"ca55cf0509b6ee7e1a7b48ca57c3f54a849155e3e545c1818f9c03711e952988"} Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.658513 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca55cf0509b6ee7e1a7b48ca57c3f54a849155e3e545c1818f9c03711e952988" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.658568 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c8twt" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.664217 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849c-account-create-update-bxwdn" event={"ID":"e80ccc3b-e580-4303-a3d1-44c548db2e2e","Type":"ContainerDied","Data":"7591ec1f4f6fe08bff72938ab6e8f12712dad0dd4b734885430f6ac1f2ec5e40"} Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.664285 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7591ec1f4f6fe08bff72938ab6e8f12712dad0dd4b734885430f6ac1f2ec5e40" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.664214 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-849c-account-create-update-bxwdn" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.666751 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ht9vf" event={"ID":"86c93add-5616-45ac-b00b-5269f071ce55","Type":"ContainerDied","Data":"884e49d537618f721e9ac7c46b477d68309a6bed3bb2ccfb47b1f8075b13f4e0"} Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.666780 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="884e49d537618f721e9ac7c46b477d68309a6bed3bb2ccfb47b1f8075b13f4e0" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.666809 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ht9vf" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.669339 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-841b-account-create-update-mrmc7" event={"ID":"d31220f5-79b6-47df-8501-3ce22c2fc213","Type":"ContainerDied","Data":"31d69bad19e564766d1dc1661cf84265e5165cdb9abce2761d68d7fb4dbaa21f"} Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.669374 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31d69bad19e564766d1dc1661cf84265e5165cdb9abce2761d68d7fb4dbaa21f" Nov 25 19:51:50 crc kubenswrapper[4775]: I1125 19:51:50.669409 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-841b-account-create-update-mrmc7" Nov 25 19:51:54 crc kubenswrapper[4775]: I1125 19:51:54.706743 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m2srp" event={"ID":"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2","Type":"ContainerStarted","Data":"7fbe7ebac47f3b873475bc1a37b3df112bc9d1decebe9e2493c994ae345fdf52"} Nov 25 19:51:54 crc kubenswrapper[4775]: I1125 19:51:54.737455 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-m2srp" podStartSLOduration=1.5322183360000001 podStartE2EDuration="8.737434738s" podCreationTimestamp="2025-11-25 19:51:46 +0000 UTC" firstStartedPulling="2025-11-25 19:51:47.024691465 +0000 UTC m=+1088.941053831" lastFinishedPulling="2025-11-25 19:51:54.229907827 +0000 UTC m=+1096.146270233" observedRunningTime="2025-11-25 19:51:54.727074651 +0000 UTC m=+1096.643437017" watchObservedRunningTime="2025-11-25 19:51:54.737434738 +0000 UTC m=+1096.653797104" Nov 25 19:51:55 crc kubenswrapper[4775]: E1125 19:51:55.051552 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice/crio-918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390\": RecentStats: unable to find data in memory cache]" Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.240111 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.321842 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dtnnl"] Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 
19:51:55.322468 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" podUID="4bfaa898-d953-4c25-b2ca-76fe02819141" containerName="dnsmasq-dns" containerID="cri-o://4d49676b9aa807373934d3701e0dd51eb11699ce27ba8f5759aed2b1545c8afb" gracePeriod=10 Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.718405 4775 generic.go:334] "Generic (PLEG): container finished" podID="4bfaa898-d953-4c25-b2ca-76fe02819141" containerID="4d49676b9aa807373934d3701e0dd51eb11699ce27ba8f5759aed2b1545c8afb" exitCode=0 Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.718485 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" event={"ID":"4bfaa898-d953-4c25-b2ca-76fe02819141","Type":"ContainerDied","Data":"4d49676b9aa807373934d3701e0dd51eb11699ce27ba8f5759aed2b1545c8afb"} Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.811177 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.969584 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spptw\" (UniqueName: \"kubernetes.io/projected/4bfaa898-d953-4c25-b2ca-76fe02819141-kube-api-access-spptw\") pod \"4bfaa898-d953-4c25-b2ca-76fe02819141\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.969991 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-sb\") pod \"4bfaa898-d953-4c25-b2ca-76fe02819141\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.970016 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-nb\") pod \"4bfaa898-d953-4c25-b2ca-76fe02819141\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.970053 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-config\") pod \"4bfaa898-d953-4c25-b2ca-76fe02819141\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.970150 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-dns-svc\") pod \"4bfaa898-d953-4c25-b2ca-76fe02819141\" (UID: \"4bfaa898-d953-4c25-b2ca-76fe02819141\") " Nov 25 19:51:55 crc kubenswrapper[4775]: I1125 19:51:55.982021 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfaa898-d953-4c25-b2ca-76fe02819141-kube-api-access-spptw" (OuterVolumeSpecName: "kube-api-access-spptw") pod "4bfaa898-d953-4c25-b2ca-76fe02819141" (UID: "4bfaa898-d953-4c25-b2ca-76fe02819141"). InnerVolumeSpecName "kube-api-access-spptw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.027507 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bfaa898-d953-4c25-b2ca-76fe02819141" (UID: "4bfaa898-d953-4c25-b2ca-76fe02819141"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.050925 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-config" (OuterVolumeSpecName: "config") pod "4bfaa898-d953-4c25-b2ca-76fe02819141" (UID: "4bfaa898-d953-4c25-b2ca-76fe02819141"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.051233 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bfaa898-d953-4c25-b2ca-76fe02819141" (UID: "4bfaa898-d953-4c25-b2ca-76fe02819141"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.051426 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bfaa898-d953-4c25-b2ca-76fe02819141" (UID: "4bfaa898-d953-4c25-b2ca-76fe02819141"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.072177 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spptw\" (UniqueName: \"kubernetes.io/projected/4bfaa898-d953-4c25-b2ca-76fe02819141-kube-api-access-spptw\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.072209 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.072217 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.072225 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.072235 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bfaa898-d953-4c25-b2ca-76fe02819141-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.732316 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" event={"ID":"4bfaa898-d953-4c25-b2ca-76fe02819141","Type":"ContainerDied","Data":"b5055b64a545a3949d13002d472d2c5841528d30f65d5b0f0d1b0b7b004ba6ed"} Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.732395 4775 scope.go:117] "RemoveContainer" containerID="4d49676b9aa807373934d3701e0dd51eb11699ce27ba8f5759aed2b1545c8afb" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.734210 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dtnnl" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.768918 4775 scope.go:117] "RemoveContainer" containerID="8cbd96404e81414f09edcef62255acf6eeea7194d858d69e597b1d52c817574f" Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.798918 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dtnnl"] Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.812368 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dtnnl"] Nov 25 19:51:56 crc kubenswrapper[4775]: I1125 19:51:56.861068 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bfaa898-d953-4c25-b2ca-76fe02819141" path="/var/lib/kubelet/pods/4bfaa898-d953-4c25-b2ca-76fe02819141/volumes" Nov 25 19:51:57 crc kubenswrapper[4775]: I1125 19:51:57.746200 4775 generic.go:334] "Generic (PLEG): container finished" podID="ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2" containerID="7fbe7ebac47f3b873475bc1a37b3df112bc9d1decebe9e2493c994ae345fdf52" exitCode=0 Nov 25 19:51:57 crc kubenswrapper[4775]: I1125 19:51:57.746265 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m2srp" event={"ID":"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2","Type":"ContainerDied","Data":"7fbe7ebac47f3b873475bc1a37b3df112bc9d1decebe9e2493c994ae345fdf52"} Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.174376 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-m2srp" Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.333233 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-config-data\") pod \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.333341 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jrxr\" (UniqueName: \"kubernetes.io/projected/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-kube-api-access-8jrxr\") pod \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.333467 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-combined-ca-bundle\") pod \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\" (UID: \"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2\") " Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.342499 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-kube-api-access-8jrxr" (OuterVolumeSpecName: "kube-api-access-8jrxr") pod "ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2" (UID: "ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2"). InnerVolumeSpecName "kube-api-access-8jrxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.383316 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2" (UID: "ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.397109 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-config-data" (OuterVolumeSpecName: "config-data") pod "ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2" (UID: "ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.438452 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.439317 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jrxr\" (UniqueName: \"kubernetes.io/projected/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-kube-api-access-8jrxr\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.439414 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.776698 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m2srp" event={"ID":"ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2","Type":"ContainerDied","Data":"f22c7c629405e67d4b32b9208ad84f8ec8e12f706b2a41a64f4ba4e75ca3a0f5"} Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.776746 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f22c7c629405e67d4b32b9208ad84f8ec8e12f706b2a41a64f4ba4e75ca3a0f5" Nov 25 19:51:59 crc kubenswrapper[4775]: I1125 19:51:59.776814 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-m2srp" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.167153 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-256p6"] Nov 25 19:52:00 crc kubenswrapper[4775]: E1125 19:52:00.169989 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2" containerName="keystone-db-sync" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.170170 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2" containerName="keystone-db-sync" Nov 25 19:52:00 crc kubenswrapper[4775]: E1125 19:52:00.170302 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80ccc3b-e580-4303-a3d1-44c548db2e2e" containerName="mariadb-account-create-update" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.170389 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80ccc3b-e580-4303-a3d1-44c548db2e2e" containerName="mariadb-account-create-update" Nov 25 19:52:00 crc kubenswrapper[4775]: E1125 19:52:00.170481 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c93add-5616-45ac-b00b-5269f071ce55" containerName="mariadb-database-create" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.170554 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c93add-5616-45ac-b00b-5269f071ce55" containerName="mariadb-database-create" Nov 25 19:52:00 crc kubenswrapper[4775]: E1125 19:52:00.170631 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34676258-9f7c-4e01-9e25-eacccc2f9a7f" containerName="mariadb-account-create-update" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.170754 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="34676258-9f7c-4e01-9e25-eacccc2f9a7f" containerName="mariadb-account-create-update" Nov 25 19:52:00 crc kubenswrapper[4775]: E1125 19:52:00.170828 4775 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4" containerName="mariadb-database-create" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.170897 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4" containerName="mariadb-database-create" Nov 25 19:52:00 crc kubenswrapper[4775]: E1125 19:52:00.171004 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfaa898-d953-4c25-b2ca-76fe02819141" containerName="dnsmasq-dns" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.171083 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfaa898-d953-4c25-b2ca-76fe02819141" containerName="dnsmasq-dns" Nov 25 19:52:00 crc kubenswrapper[4775]: E1125 19:52:00.171163 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8594b0a4-733b-4fb6-ad7c-1dc2c58a3908" containerName="mariadb-database-create" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.171248 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8594b0a4-733b-4fb6-ad7c-1dc2c58a3908" containerName="mariadb-database-create" Nov 25 19:52:00 crc kubenswrapper[4775]: E1125 19:52:00.171335 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31220f5-79b6-47df-8501-3ce22c2fc213" containerName="mariadb-account-create-update" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.171408 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31220f5-79b6-47df-8501-3ce22c2fc213" containerName="mariadb-account-create-update" Nov 25 19:52:00 crc kubenswrapper[4775]: E1125 19:52:00.171504 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfaa898-d953-4c25-b2ca-76fe02819141" containerName="init" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.171590 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfaa898-d953-4c25-b2ca-76fe02819141" containerName="init" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.172038 4775 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="34676258-9f7c-4e01-9e25-eacccc2f9a7f" containerName="mariadb-account-create-update" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.172143 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2" containerName="keystone-db-sync" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.172210 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bfaa898-d953-4c25-b2ca-76fe02819141" containerName="dnsmasq-dns" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.172270 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c93add-5616-45ac-b00b-5269f071ce55" containerName="mariadb-database-create" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.172325 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d31220f5-79b6-47df-8501-3ce22c2fc213" containerName="mariadb-account-create-update" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.172385 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8594b0a4-733b-4fb6-ad7c-1dc2c58a3908" containerName="mariadb-database-create" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.172437 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80ccc3b-e580-4303-a3d1-44c548db2e2e" containerName="mariadb-account-create-update" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.172491 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4" containerName="mariadb-database-create" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.173488 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.179240 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.179473 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.179691 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xlssr" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.179836 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.180077 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.195433 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hhnpn"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.199617 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.213747 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-256p6"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.247353 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hhnpn"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.288872 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-dns-svc\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.288920 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-config\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.288945 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-fernet-keys\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.288964 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-config-data\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc 
kubenswrapper[4775]: I1125 19:52:00.288988 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.289039 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtpx\" (UniqueName: \"kubernetes.io/projected/41c26c06-bec8-4207-ba4a-d25fd1a85a47-kube-api-access-tmtpx\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.289070 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-combined-ca-bundle\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.289100 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-scripts\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.289118 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " 
pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.289137 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-credential-keys\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.289152 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkgbs\" (UniqueName: \"kubernetes.io/projected/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-kube-api-access-xkgbs\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.332280 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-p6t4d"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.333243 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.338631 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.338876 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.338980 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pp5wj" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.351113 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-p6t4d"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.390748 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpk6x\" (UniqueName: \"kubernetes.io/projected/bc8913d5-5107-422d-8554-1d8c951253fd-kube-api-access-gpk6x\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.390800 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc8913d5-5107-422d-8554-1d8c951253fd-etc-machine-id\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.390842 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-dns-svc\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.390864 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-config\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.390883 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-fernet-keys\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.390915 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-config-data\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.390933 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-scripts\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.390955 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.391006 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmtpx\" (UniqueName: 
\"kubernetes.io/projected/41c26c06-bec8-4207-ba4a-d25fd1a85a47-kube-api-access-tmtpx\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.391029 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-combined-ca-bundle\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.391050 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-combined-ca-bundle\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.391065 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-db-sync-config-data\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.391095 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-scripts\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.391112 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-config-data\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.391129 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.391147 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-credential-keys\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.391165 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkgbs\" (UniqueName: \"kubernetes.io/projected/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-kube-api-access-xkgbs\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.392759 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.392814 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.393203 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-dns-svc\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.393398 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-config\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.398526 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-config-data\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.401487 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-fernet-keys\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.401664 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-credential-keys\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc 
kubenswrapper[4775]: I1125 19:52:00.404380 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-scripts\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.409373 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-combined-ca-bundle\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.421668 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.423394 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.432817 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.433080 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.454470 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmtpx\" (UniqueName: \"kubernetes.io/projected/41c26c06-bec8-4207-ba4a-d25fd1a85a47-kube-api-access-tmtpx\") pod \"dnsmasq-dns-6546db6db7-hhnpn\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.473115 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493496 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-scripts\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493557 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-run-httpd\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493593 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493625 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-config-data\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493666 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-combined-ca-bundle\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493688 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493726 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-db-sync-config-data\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493756 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-config-data\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493777 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-log-httpd\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493803 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-scripts\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493823 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpk6x\" (UniqueName: \"kubernetes.io/projected/bc8913d5-5107-422d-8554-1d8c951253fd-kube-api-access-gpk6x\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " 
pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493840 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc8913d5-5107-422d-8554-1d8c951253fd-etc-machine-id\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.493861 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2bp\" (UniqueName: \"kubernetes.io/projected/f85f8651-1533-4c47-94a7-8d9e5114771d-kube-api-access-pj2bp\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.500253 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc8913d5-5107-422d-8554-1d8c951253fd-etc-machine-id\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.511790 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-combined-ca-bundle\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.512298 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-scripts\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.512378 4775 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xkgbs\" (UniqueName: \"kubernetes.io/projected/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-kube-api-access-xkgbs\") pod \"keystone-bootstrap-256p6\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.518478 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-config-data\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.518938 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.521998 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-db-sync-config-data\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.562587 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpk6x\" (UniqueName: \"kubernetes.io/projected/bc8913d5-5107-422d-8554-1d8c951253fd-kube-api-access-gpk6x\") pod \"cinder-db-sync-p6t4d\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.562667 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6vb6j"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.563723 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.569374 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.570740 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.570968 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6trrw" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.585354 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6vb6j"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.598935 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-config-data\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.599686 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-config\") pod \"neutron-db-sync-6vb6j\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.599779 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.599811 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l75t7\" 
(UniqueName: \"kubernetes.io/projected/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-kube-api-access-l75t7\") pod \"neutron-db-sync-6vb6j\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.600157 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-log-httpd\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.600178 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-combined-ca-bundle\") pod \"neutron-db-sync-6vb6j\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.600200 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-scripts\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.600230 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj2bp\" (UniqueName: \"kubernetes.io/projected/f85f8651-1533-4c47-94a7-8d9e5114771d-kube-api-access-pj2bp\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.600287 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-run-httpd\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " 
pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.600311 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.600753 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-log-httpd\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.604598 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-scripts\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.605152 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.605449 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-run-httpd\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.616079 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-config-data\") pod 
\"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.616764 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.654172 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.701545 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj2bp\" (UniqueName: \"kubernetes.io/projected/f85f8651-1533-4c47-94a7-8d9e5114771d-kube-api-access-pj2bp\") pod \"ceilometer-0\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.701978 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xnrzx"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.703476 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.706092 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-config\") pod \"neutron-db-sync-6vb6j\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.706735 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l75t7\" (UniqueName: \"kubernetes.io/projected/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-kube-api-access-l75t7\") pod \"neutron-db-sync-6vb6j\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.706865 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-combined-ca-bundle\") pod \"neutron-db-sync-6vb6j\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.727562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-combined-ca-bundle\") pod \"neutron-db-sync-6vb6j\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.727898 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.728024 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-config\") pod \"neutron-db-sync-6vb6j\" (UID: 
\"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.728088 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8kq4w" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.759757 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xnrzx"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.772495 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l75t7\" (UniqueName: \"kubernetes.io/projected/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-kube-api-access-l75t7\") pod \"neutron-db-sync-6vb6j\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.784060 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7rf4s"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.785244 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.798477 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.798721 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-psflb" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.799066 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.806806 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.982147 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.982220 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7rf4s"] Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.982469 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.997426 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-combined-ca-bundle\") pod \"barbican-db-sync-xnrzx\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.997808 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-db-sync-config-data\") pod \"barbican-db-sync-xnrzx\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.997829 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-combined-ca-bundle\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.997907 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-scripts\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:00 crc 
kubenswrapper[4775]: I1125 19:52:00.997933 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8zj\" (UniqueName: \"kubernetes.io/projected/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-kube-api-access-kn8zj\") pod \"barbican-db-sync-xnrzx\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.997963 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpqpq\" (UniqueName: \"kubernetes.io/projected/61bc599e-4476-4c80-9749-97b489f52a22-kube-api-access-bpqpq\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.998373 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-config-data\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.998398 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61bc599e-4476-4c80-9749-97b489f52a22-logs\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:00 crc kubenswrapper[4775]: I1125 19:52:00.998627 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hhnpn"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.022133 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-rmzbw"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.025073 4775 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.036202 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-rmzbw"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.103684 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-combined-ca-bundle\") pod \"barbican-db-sync-xnrzx\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.103780 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.103803 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.103820 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-db-sync-config-data\") pod \"barbican-db-sync-xnrzx\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.103835 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-combined-ca-bundle\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.103893 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-scripts\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.103939 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8zj\" (UniqueName: \"kubernetes.io/projected/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-kube-api-access-kn8zj\") pod \"barbican-db-sync-xnrzx\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.103958 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5gt\" (UniqueName: \"kubernetes.io/projected/e2523e8b-c047-426a-908a-5b91c0e764bd-kube-api-access-bt5gt\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.104003 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpqpq\" (UniqueName: \"kubernetes.io/projected/61bc599e-4476-4c80-9749-97b489f52a22-kube-api-access-bpqpq\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.104025 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-config-data\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.104040 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61bc599e-4476-4c80-9749-97b489f52a22-logs\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.104062 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-config\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.104083 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.107868 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61bc599e-4476-4c80-9749-97b489f52a22-logs\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.109314 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-db-sync-config-data\") pod \"barbican-db-sync-xnrzx\" (UID: 
\"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.110163 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-combined-ca-bundle\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.112473 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-config-data\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.114204 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-combined-ca-bundle\") pod \"barbican-db-sync-xnrzx\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.120803 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-scripts\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.121879 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8zj\" (UniqueName: \"kubernetes.io/projected/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-kube-api-access-kn8zj\") pod \"barbican-db-sync-xnrzx\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.126387 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpqpq\" (UniqueName: \"kubernetes.io/projected/61bc599e-4476-4c80-9749-97b489f52a22-kube-api-access-bpqpq\") pod \"placement-db-sync-7rf4s\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.168595 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-p6t4d"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.187890 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.206009 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.206053 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.206120 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt5gt\" (UniqueName: \"kubernetes.io/projected/e2523e8b-c047-426a-908a-5b91c0e764bd-kube-api-access-bt5gt\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.206154 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-config\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.206170 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.207049 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.207577 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.208125 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.208872 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-config\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: 
\"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.227946 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5gt\" (UniqueName: \"kubernetes.io/projected/e2523e8b-c047-426a-908a-5b91c0e764bd-kube-api-access-bt5gt\") pod \"dnsmasq-dns-7987f74bbc-rmzbw\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: W1125 19:52:01.275043 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41c26c06_bec8_4207_ba4a_d25fd1a85a47.slice/crio-498e4a7d087ca726f68bac5e4db410d5c7f0940f038a9077868a7334e1fc2536 WatchSource:0}: Error finding container 498e4a7d087ca726f68bac5e4db410d5c7f0940f038a9077868a7334e1fc2536: Status 404 returned error can't find the container with id 498e4a7d087ca726f68bac5e4db410d5c7f0940f038a9077868a7334e1fc2536 Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.288411 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hhnpn"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.340641 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.350868 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.448556 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-256p6"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.498494 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6vb6j"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.539229 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:52:01 crc kubenswrapper[4775]: W1125 19:52:01.555947 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7477a30_1ee6_4d7e_83d2_7650c311ef6a.slice/crio-9084ada2ded7c56457a19f075fa073d5e53f17c5af36b47df16df9b8b6534af3 WatchSource:0}: Error finding container 9084ada2ded7c56457a19f075fa073d5e53f17c5af36b47df16df9b8b6534af3: Status 404 returned error can't find the container with id 9084ada2ded7c56457a19f075fa073d5e53f17c5af36b47df16df9b8b6534af3 Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.717359 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-rmzbw"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.728247 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7rf4s"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.843232 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xnrzx"] Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.964744 4775 generic.go:334] "Generic (PLEG): container finished" podID="41c26c06-bec8-4207-ba4a-d25fd1a85a47" containerID="b0855d461dc8d92cc0d789b200da5e3d1afbb6f1794fb7843c732d74f9fed94e" exitCode=0 Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.964807 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" 
event={"ID":"41c26c06-bec8-4207-ba4a-d25fd1a85a47","Type":"ContainerDied","Data":"b0855d461dc8d92cc0d789b200da5e3d1afbb6f1794fb7843c732d74f9fed94e"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.965103 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" event={"ID":"41c26c06-bec8-4207-ba4a-d25fd1a85a47","Type":"ContainerStarted","Data":"498e4a7d087ca726f68bac5e4db410d5c7f0940f038a9077868a7334e1fc2536"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.966976 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85f8651-1533-4c47-94a7-8d9e5114771d","Type":"ContainerStarted","Data":"06d4c853d8d9527ac8cc78ae2198c761a3f7267da54d0a9d7ab6e3e25f49b148"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.968457 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-256p6" event={"ID":"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1","Type":"ContainerStarted","Data":"e5105b81237ebcc4650e85e6a960fe98ba0a0a7ae3611d067275117aef61b711"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.968504 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-256p6" event={"ID":"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1","Type":"ContainerStarted","Data":"8372f07a18482c5de2aaae51cfef487bf4e9c46c404acb56768d88e9b4b544f2"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.971511 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" event={"ID":"e2523e8b-c047-426a-908a-5b91c0e764bd","Type":"ContainerStarted","Data":"38732b094030c12d59bd61604c3736dd800ea69085cf43c0bfda32b72d86cd96"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.971543 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" 
event={"ID":"e2523e8b-c047-426a-908a-5b91c0e764bd","Type":"ContainerStarted","Data":"e1f4220789a2cf9b7d00e7697e684c93780e16dac94ed4875c011939ab2ca88f"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.976831 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7rf4s" event={"ID":"61bc599e-4476-4c80-9749-97b489f52a22","Type":"ContainerStarted","Data":"aea11d1c323f38ddff1ce5a3113e3831115263c0ad65b1da01edbdafecb99636"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.984441 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6vb6j" event={"ID":"e7477a30-1ee6-4d7e-83d2-7650c311ef6a","Type":"ContainerStarted","Data":"fc714c37e611a72b165de6456e699bd6bbbde7d2a85d5e92737ac954f01e7d8b"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.984493 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6vb6j" event={"ID":"e7477a30-1ee6-4d7e-83d2-7650c311ef6a","Type":"ContainerStarted","Data":"9084ada2ded7c56457a19f075fa073d5e53f17c5af36b47df16df9b8b6534af3"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.994871 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p6t4d" event={"ID":"bc8913d5-5107-422d-8554-1d8c951253fd","Type":"ContainerStarted","Data":"63e9bce2424c0317fcc51a84d6a835d7168072ab4ffc3bef9eda8b8567cc989c"} Nov 25 19:52:01 crc kubenswrapper[4775]: I1125 19:52:01.996353 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xnrzx" event={"ID":"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f","Type":"ContainerStarted","Data":"6c7d08ac3419311ea380ebee566d7840164e0266d4f0ca3f35d167c79f2aeaf1"} Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.031225 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-256p6" podStartSLOduration=2.031202282 podStartE2EDuration="2.031202282s" podCreationTimestamp="2025-11-25 19:52:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:02.025111048 +0000 UTC m=+1103.941473444" watchObservedRunningTime="2025-11-25 19:52:02.031202282 +0000 UTC m=+1103.947564648" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.308001 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.328513 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6vb6j" podStartSLOduration=2.32849845 podStartE2EDuration="2.32849845s" podCreationTimestamp="2025-11-25 19:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:02.052385769 +0000 UTC m=+1103.968748145" watchObservedRunningTime="2025-11-25 19:52:02.32849845 +0000 UTC m=+1104.244860816" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.438028 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-nb\") pod \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.438114 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmtpx\" (UniqueName: \"kubernetes.io/projected/41c26c06-bec8-4207-ba4a-d25fd1a85a47-kube-api-access-tmtpx\") pod \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.438148 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-sb\") 
pod \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.438174 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-dns-svc\") pod \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.438200 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-config\") pod \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\" (UID: \"41c26c06-bec8-4207-ba4a-d25fd1a85a47\") " Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.445059 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c26c06-bec8-4207-ba4a-d25fd1a85a47-kube-api-access-tmtpx" (OuterVolumeSpecName: "kube-api-access-tmtpx") pod "41c26c06-bec8-4207-ba4a-d25fd1a85a47" (UID: "41c26c06-bec8-4207-ba4a-d25fd1a85a47"). InnerVolumeSpecName "kube-api-access-tmtpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.464958 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41c26c06-bec8-4207-ba4a-d25fd1a85a47" (UID: "41c26c06-bec8-4207-ba4a-d25fd1a85a47"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.469768 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-config" (OuterVolumeSpecName: "config") pod "41c26c06-bec8-4207-ba4a-d25fd1a85a47" (UID: "41c26c06-bec8-4207-ba4a-d25fd1a85a47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.473020 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41c26c06-bec8-4207-ba4a-d25fd1a85a47" (UID: "41c26c06-bec8-4207-ba4a-d25fd1a85a47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.481712 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41c26c06-bec8-4207-ba4a-d25fd1a85a47" (UID: "41c26c06-bec8-4207-ba4a-d25fd1a85a47"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.559238 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.559276 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmtpx\" (UniqueName: \"kubernetes.io/projected/41c26c06-bec8-4207-ba4a-d25fd1a85a47-kube-api-access-tmtpx\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.559289 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.559298 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.559307 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41c26c06-bec8-4207-ba4a-d25fd1a85a47-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:02 crc kubenswrapper[4775]: I1125 19:52:02.805938 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.026092 4775 generic.go:334] "Generic (PLEG): container finished" podID="e2523e8b-c047-426a-908a-5b91c0e764bd" containerID="38732b094030c12d59bd61604c3736dd800ea69085cf43c0bfda32b72d86cd96" exitCode=0 Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.026491 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" 
event={"ID":"e2523e8b-c047-426a-908a-5b91c0e764bd","Type":"ContainerDied","Data":"38732b094030c12d59bd61604c3736dd800ea69085cf43c0bfda32b72d86cd96"} Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.026548 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" event={"ID":"e2523e8b-c047-426a-908a-5b91c0e764bd","Type":"ContainerStarted","Data":"04018220d9b93fc6801b13ba6b2f15456e209fa7995d4e1c6bf774ca394cffda"} Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.026617 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.029162 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" event={"ID":"41c26c06-bec8-4207-ba4a-d25fd1a85a47","Type":"ContainerDied","Data":"498e4a7d087ca726f68bac5e4db410d5c7f0940f038a9077868a7334e1fc2536"} Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.029209 4775 scope.go:117] "RemoveContainer" containerID="b0855d461dc8d92cc0d789b200da5e3d1afbb6f1794fb7843c732d74f9fed94e" Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.029328 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-hhnpn" Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.050133 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" podStartSLOduration=3.050116139 podStartE2EDuration="3.050116139s" podCreationTimestamp="2025-11-25 19:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:03.041339653 +0000 UTC m=+1104.957702019" watchObservedRunningTime="2025-11-25 19:52:03.050116139 +0000 UTC m=+1104.966478495" Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.085731 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hhnpn"] Nov 25 19:52:03 crc kubenswrapper[4775]: I1125 19:52:03.090398 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-hhnpn"] Nov 25 19:52:04 crc kubenswrapper[4775]: I1125 19:52:04.858078 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c26c06-bec8-4207-ba4a-d25fd1a85a47" path="/var/lib/kubelet/pods/41c26c06-bec8-4207-ba4a-d25fd1a85a47/volumes" Nov 25 19:52:05 crc kubenswrapper[4775]: E1125 19:52:05.296086 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice/crio-918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b58db44_a88c_4d8a_803a_2ca1eeb0c6d1.slice/crio-e5105b81237ebcc4650e85e6a960fe98ba0a0a7ae3611d067275117aef61b711.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b58db44_a88c_4d8a_803a_2ca1eeb0c6d1.slice/crio-conmon-e5105b81237ebcc4650e85e6a960fe98ba0a0a7ae3611d067275117aef61b711.scope\": RecentStats: unable to find data in memory cache]" Nov 25 19:52:06 crc kubenswrapper[4775]: I1125 19:52:06.082426 4775 generic.go:334] "Generic (PLEG): container finished" podID="7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" containerID="e5105b81237ebcc4650e85e6a960fe98ba0a0a7ae3611d067275117aef61b711" exitCode=0 Nov 25 19:52:06 crc kubenswrapper[4775]: I1125 19:52:06.082494 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-256p6" event={"ID":"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1","Type":"ContainerDied","Data":"e5105b81237ebcc4650e85e6a960fe98ba0a0a7ae3611d067275117aef61b711"} Nov 25 19:52:11 crc kubenswrapper[4775]: I1125 19:52:11.070897 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:52:11 crc kubenswrapper[4775]: I1125 19:52:11.073464 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:52:11 crc kubenswrapper[4775]: I1125 19:52:11.353722 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:11 crc kubenswrapper[4775]: I1125 19:52:11.445468 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-qn6h5"] Nov 25 19:52:11 crc kubenswrapper[4775]: I1125 
19:52:11.445964 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" podUID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerName="dnsmasq-dns" containerID="cri-o://e67fb946cde1bec2112257c7ae7185cb66aff549f1dbea10e3350ab8c7e10500" gracePeriod=10 Nov 25 19:52:12 crc kubenswrapper[4775]: I1125 19:52:12.151318 4775 generic.go:334] "Generic (PLEG): container finished" podID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerID="e67fb946cde1bec2112257c7ae7185cb66aff549f1dbea10e3350ab8c7e10500" exitCode=0 Nov 25 19:52:12 crc kubenswrapper[4775]: I1125 19:52:12.151390 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" event={"ID":"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a","Type":"ContainerDied","Data":"e67fb946cde1bec2112257c7ae7185cb66aff549f1dbea10e3350ab8c7e10500"} Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.409357 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:13 crc kubenswrapper[4775]: E1125 19:52:13.469635 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 25 19:52:13 crc kubenswrapper[4775]: E1125 19:52:13.469895 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n74h578h5bdh588h5d8hbdh566hd5h5fch67h5c7h648h66h558h96h5bch5b5h694h5bhdbh649h5cdhfbh5c7h9fh576h587h8dh696h58dhcdh687q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj2bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f85f8651-1533-4c47-94a7-8d9e5114771d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.508642 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-config-data\") pod \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.508817 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-fernet-keys\") pod \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.508892 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-combined-ca-bundle\") pod \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\" (UID: 
\"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.508971 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkgbs\" (UniqueName: \"kubernetes.io/projected/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-kube-api-access-xkgbs\") pod \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.509211 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-credential-keys\") pod \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.509285 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-scripts\") pod \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\" (UID: \"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1\") " Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.515820 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" (UID: "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.525175 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" (UID: "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.525465 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-kube-api-access-xkgbs" (OuterVolumeSpecName: "kube-api-access-xkgbs") pod "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" (UID: "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1"). InnerVolumeSpecName "kube-api-access-xkgbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.526858 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-scripts" (OuterVolumeSpecName: "scripts") pod "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" (UID: "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.554225 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" (UID: "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.577977 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-config-data" (OuterVolumeSpecName: "config-data") pod "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" (UID: "7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.611293 4775 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.611334 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.611348 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.611360 4775 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.611371 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:13 crc kubenswrapper[4775]: I1125 19:52:13.611383 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkgbs\" (UniqueName: \"kubernetes.io/projected/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1-kube-api-access-xkgbs\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:14 crc kubenswrapper[4775]: E1125 19:52:14.038882 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 25 19:52:14 crc kubenswrapper[4775]: E1125 19:52:14.039171 4775 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kn8zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xnrzx_openstack(7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:52:14 crc kubenswrapper[4775]: E1125 19:52:14.040948 4775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xnrzx" podUID="7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.180309 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-256p6" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.180415 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-256p6" event={"ID":"7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1","Type":"ContainerDied","Data":"8372f07a18482c5de2aaae51cfef487bf4e9c46c404acb56768d88e9b4b544f2"} Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.180455 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8372f07a18482c5de2aaae51cfef487bf4e9c46c404acb56768d88e9b4b544f2" Nov 25 19:52:14 crc kubenswrapper[4775]: E1125 19:52:14.182252 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xnrzx" podUID="7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.513246 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-256p6"] Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.520177 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-256p6"] Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.589277 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bln65"] Nov 25 19:52:14 crc kubenswrapper[4775]: E1125 19:52:14.589576 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" containerName="keystone-bootstrap" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.589595 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" containerName="keystone-bootstrap" Nov 25 19:52:14 crc kubenswrapper[4775]: E1125 19:52:14.589625 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c26c06-bec8-4207-ba4a-d25fd1a85a47" containerName="init" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.589631 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c26c06-bec8-4207-ba4a-d25fd1a85a47" containerName="init" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.589782 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" containerName="keystone-bootstrap" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.589800 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c26c06-bec8-4207-ba4a-d25fd1a85a47" containerName="init" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.590282 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.592008 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.592456 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.592799 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.593004 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.593165 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xlssr" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.611699 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bln65"] Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.729870 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-config-data\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.729913 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-fernet-keys\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.729930 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-scripts\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.729977 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2st4\" (UniqueName: \"kubernetes.io/projected/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-kube-api-access-f2st4\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.730026 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-combined-ca-bundle\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.730080 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-credential-keys\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.831857 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-config-data\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.831901 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-fernet-keys\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.831917 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-scripts\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.831963 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2st4\" (UniqueName: \"kubernetes.io/projected/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-kube-api-access-f2st4\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.832011 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-combined-ca-bundle\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.832064 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-credential-keys\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.837789 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-scripts\") pod \"keystone-bootstrap-bln65\" (UID: 
\"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.839040 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-combined-ca-bundle\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.839147 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-config-data\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.851488 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-credential-keys\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.853084 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-fernet-keys\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.855128 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2st4\" (UniqueName: \"kubernetes.io/projected/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-kube-api-access-f2st4\") pod \"keystone-bootstrap-bln65\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 
19:52:14.861090 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1" path="/var/lib/kubelet/pods/7b58db44-a88c-4d8a-803a-2ca1eeb0c6d1/volumes" Nov 25 19:52:14 crc kubenswrapper[4775]: I1125 19:52:14.917786 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:15 crc kubenswrapper[4775]: I1125 19:52:15.238484 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" podUID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Nov 25 19:52:15 crc kubenswrapper[4775]: E1125 19:52:15.522526 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice/crio-918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390\": RecentStats: unable to find data in memory cache]" Nov 25 19:52:20 crc kubenswrapper[4775]: I1125 19:52:20.238883 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" podUID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Nov 25 19:52:22 crc kubenswrapper[4775]: E1125 19:52:22.848751 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 25 19:52:22 crc kubenswrapper[4775]: E1125 19:52:22.849799 4775 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpk6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-p6t4d_openstack(bc8913d5-5107-422d-8554-1d8c951253fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 19:52:22 crc kubenswrapper[4775]: E1125 19:52:22.850947 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-p6t4d" podUID="bc8913d5-5107-422d-8554-1d8c951253fd" Nov 25 19:52:22 crc kubenswrapper[4775]: I1125 19:52:22.867410 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:52:22 crc kubenswrapper[4775]: I1125 19:52:22.998552 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-dns-svc\") pod \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " Nov 25 19:52:22 crc kubenswrapper[4775]: I1125 19:52:22.998600 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-sb\") pod \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " Nov 25 19:52:22 crc kubenswrapper[4775]: I1125 19:52:22.998699 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-nb\") pod \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " Nov 25 19:52:22 crc kubenswrapper[4775]: I1125 19:52:22.998791 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmqdp\" (UniqueName: \"kubernetes.io/projected/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-kube-api-access-bmqdp\") pod \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " Nov 25 19:52:22 crc kubenswrapper[4775]: I1125 19:52:22.998814 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-config\") pod \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\" (UID: \"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a\") " Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.010926 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-kube-api-access-bmqdp" (OuterVolumeSpecName: "kube-api-access-bmqdp") pod "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" (UID: "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a"). InnerVolumeSpecName "kube-api-access-bmqdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.047875 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" (UID: "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.053138 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-config" (OuterVolumeSpecName: "config") pod "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" (UID: "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.055083 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" (UID: "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.064826 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" (UID: "1d9d8128-4199-4c1a-8e3d-ee81f10ef97a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.100611 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmqdp\" (UniqueName: \"kubernetes.io/projected/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-kube-api-access-bmqdp\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.100661 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.100673 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.100682 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.100692 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.233489 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bln65"] Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.259610 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bln65" event={"ID":"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2","Type":"ContainerStarted","Data":"c7f6317050f54b89d5c4229cc9f7de2ddf5631914a3a1ef4c0b347c49e56d4a0"} Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.261961 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" event={"ID":"1d9d8128-4199-4c1a-8e3d-ee81f10ef97a","Type":"ContainerDied","Data":"e160b71d52a928f3f42d07aabb396f767aea2c05a39c13359cce97513a33a9c7"} Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.262019 4775 scope.go:117] "RemoveContainer" containerID="e67fb946cde1bec2112257c7ae7185cb66aff549f1dbea10e3350ab8c7e10500" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.262196 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-qn6h5" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.265195 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7rf4s" event={"ID":"61bc599e-4476-4c80-9749-97b489f52a22","Type":"ContainerStarted","Data":"78c08c37208edf0dae69e53f83802f661b18c1a65bce906d8edcbac01227b2e1"} Nov 25 19:52:23 crc kubenswrapper[4775]: E1125 19:52:23.277916 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-p6t4d" podUID="bc8913d5-5107-422d-8554-1d8c951253fd" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.296359 4775 scope.go:117] "RemoveContainer" containerID="b32428d88b96b909ac56dbf6a6dce670e26ac277ea01561dfb377f4aa1f55a2f" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.316232 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7rf4s" podStartSLOduration=3.515670138 podStartE2EDuration="23.316216435s" podCreationTimestamp="2025-11-25 19:52:00 +0000 UTC" firstStartedPulling="2025-11-25 19:52:01.745951743 +0000 UTC m=+1103.662314109" lastFinishedPulling="2025-11-25 19:52:21.54649803 +0000 UTC m=+1123.462860406" observedRunningTime="2025-11-25 19:52:23.302951268 +0000 UTC m=+1125.219313644" 
watchObservedRunningTime="2025-11-25 19:52:23.316216435 +0000 UTC m=+1125.232578791" Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.335771 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-qn6h5"] Nov 25 19:52:23 crc kubenswrapper[4775]: I1125 19:52:23.344223 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-qn6h5"] Nov 25 19:52:24 crc kubenswrapper[4775]: I1125 19:52:24.274146 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bln65" event={"ID":"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2","Type":"ContainerStarted","Data":"8377699798b0043f92dd3b8b71381aa0e025a66ecc06a5a996659e764c7bde5e"} Nov 25 19:52:24 crc kubenswrapper[4775]: I1125 19:52:24.277073 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85f8651-1533-4c47-94a7-8d9e5114771d","Type":"ContainerStarted","Data":"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052"} Nov 25 19:52:24 crc kubenswrapper[4775]: I1125 19:52:24.300322 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bln65" podStartSLOduration=10.300306912 podStartE2EDuration="10.300306912s" podCreationTimestamp="2025-11-25 19:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:24.296323606 +0000 UTC m=+1126.212685972" watchObservedRunningTime="2025-11-25 19:52:24.300306912 +0000 UTC m=+1126.216669278" Nov 25 19:52:24 crc kubenswrapper[4775]: I1125 19:52:24.856178 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" path="/var/lib/kubelet/pods/1d9d8128-4199-4c1a-8e3d-ee81f10ef97a/volumes" Nov 25 19:52:25 crc kubenswrapper[4775]: I1125 19:52:25.286314 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="61bc599e-4476-4c80-9749-97b489f52a22" containerID="78c08c37208edf0dae69e53f83802f661b18c1a65bce906d8edcbac01227b2e1" exitCode=0 Nov 25 19:52:25 crc kubenswrapper[4775]: I1125 19:52:25.286503 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7rf4s" event={"ID":"61bc599e-4476-4c80-9749-97b489f52a22","Type":"ContainerDied","Data":"78c08c37208edf0dae69e53f83802f661b18c1a65bce906d8edcbac01227b2e1"} Nov 25 19:52:25 crc kubenswrapper[4775]: E1125 19:52:25.781529 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice/crio-918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice\": RecentStats: unable to find data in memory cache]" Nov 25 19:52:27 crc kubenswrapper[4775]: I1125 19:52:27.307776 4775 generic.go:334] "Generic (PLEG): container finished" podID="b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" containerID="8377699798b0043f92dd3b8b71381aa0e025a66ecc06a5a996659e764c7bde5e" exitCode=0 Nov 25 19:52:27 crc kubenswrapper[4775]: I1125 19:52:27.307864 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bln65" event={"ID":"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2","Type":"ContainerDied","Data":"8377699798b0043f92dd3b8b71381aa0e025a66ecc06a5a996659e764c7bde5e"} Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.287518 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.334392 4775 generic.go:334] "Generic (PLEG): container finished" podID="e7477a30-1ee6-4d7e-83d2-7650c311ef6a" containerID="fc714c37e611a72b165de6456e699bd6bbbde7d2a85d5e92737ac954f01e7d8b" exitCode=0 Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.334581 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6vb6j" event={"ID":"e7477a30-1ee6-4d7e-83d2-7650c311ef6a","Type":"ContainerDied","Data":"fc714c37e611a72b165de6456e699bd6bbbde7d2a85d5e92737ac954f01e7d8b"} Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.343168 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7rf4s" event={"ID":"61bc599e-4476-4c80-9749-97b489f52a22","Type":"ContainerDied","Data":"aea11d1c323f38ddff1ce5a3113e3831115263c0ad65b1da01edbdafecb99636"} Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.343245 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea11d1c323f38ddff1ce5a3113e3831115263c0ad65b1da01edbdafecb99636" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.343208 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7rf4s" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.415502 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61bc599e-4476-4c80-9749-97b489f52a22-logs\") pod \"61bc599e-4476-4c80-9749-97b489f52a22\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.415669 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-combined-ca-bundle\") pod \"61bc599e-4476-4c80-9749-97b489f52a22\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.415726 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-scripts\") pod \"61bc599e-4476-4c80-9749-97b489f52a22\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.415796 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpqpq\" (UniqueName: \"kubernetes.io/projected/61bc599e-4476-4c80-9749-97b489f52a22-kube-api-access-bpqpq\") pod \"61bc599e-4476-4c80-9749-97b489f52a22\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.415886 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-config-data\") pod \"61bc599e-4476-4c80-9749-97b489f52a22\" (UID: \"61bc599e-4476-4c80-9749-97b489f52a22\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.415904 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/61bc599e-4476-4c80-9749-97b489f52a22-logs" (OuterVolumeSpecName: "logs") pod "61bc599e-4476-4c80-9749-97b489f52a22" (UID: "61bc599e-4476-4c80-9749-97b489f52a22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.416358 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61bc599e-4476-4c80-9749-97b489f52a22-logs\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.421845 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bc599e-4476-4c80-9749-97b489f52a22-kube-api-access-bpqpq" (OuterVolumeSpecName: "kube-api-access-bpqpq") pod "61bc599e-4476-4c80-9749-97b489f52a22" (UID: "61bc599e-4476-4c80-9749-97b489f52a22"). InnerVolumeSpecName "kube-api-access-bpqpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.447518 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-scripts" (OuterVolumeSpecName: "scripts") pod "61bc599e-4476-4c80-9749-97b489f52a22" (UID: "61bc599e-4476-4c80-9749-97b489f52a22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.458840 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61bc599e-4476-4c80-9749-97b489f52a22" (UID: "61bc599e-4476-4c80-9749-97b489f52a22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.460321 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-config-data" (OuterVolumeSpecName: "config-data") pod "61bc599e-4476-4c80-9749-97b489f52a22" (UID: "61bc599e-4476-4c80-9749-97b489f52a22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.520244 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.520277 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpqpq\" (UniqueName: \"kubernetes.io/projected/61bc599e-4476-4c80-9749-97b489f52a22-kube-api-access-bpqpq\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.520288 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.520297 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bc599e-4476-4c80-9749-97b489f52a22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.633247 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.722711 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-config-data\") pod \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.723043 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-scripts\") pod \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.723096 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2st4\" (UniqueName: \"kubernetes.io/projected/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-kube-api-access-f2st4\") pod \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.723130 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-combined-ca-bundle\") pod \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.723201 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-fernet-keys\") pod \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.723816 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-credential-keys\") pod \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\" (UID: \"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2\") " Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.726074 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-scripts" (OuterVolumeSpecName: "scripts") pod "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" (UID: "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.726422 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" (UID: "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.726538 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-kube-api-access-f2st4" (OuterVolumeSpecName: "kube-api-access-f2st4") pod "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" (UID: "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2"). InnerVolumeSpecName "kube-api-access-f2st4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.727632 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" (UID: "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.744487 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-config-data" (OuterVolumeSpecName: "config-data") pod "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" (UID: "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.745676 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" (UID: "b5fc0506-2ee8-4325-92cd-06f4a5cb61e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.825970 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.826009 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2st4\" (UniqueName: \"kubernetes.io/projected/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-kube-api-access-f2st4\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.826025 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.826037 4775 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 
19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.826049 4775 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:28 crc kubenswrapper[4775]: I1125 19:52:28.826059 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.357375 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85f8651-1533-4c47-94a7-8d9e5114771d","Type":"ContainerStarted","Data":"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6"} Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.363073 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bln65" event={"ID":"b5fc0506-2ee8-4325-92cd-06f4a5cb61e2","Type":"ContainerDied","Data":"c7f6317050f54b89d5c4229cc9f7de2ddf5631914a3a1ef4c0b347c49e56d4a0"} Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.363094 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bln65" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.363116 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7f6317050f54b89d5c4229cc9f7de2ddf5631914a3a1ef4c0b347c49e56d4a0" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.366009 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xnrzx" event={"ID":"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f","Type":"ContainerStarted","Data":"94f5bf9c2755ae1bbc2e249ee8e6e89eedc18597c932e6b27f4d48382f10c491"} Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.391379 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xnrzx" podStartSLOduration=2.792772447 podStartE2EDuration="29.39136202s" podCreationTimestamp="2025-11-25 19:52:00 +0000 UTC" firstStartedPulling="2025-11-25 19:52:01.853243685 +0000 UTC m=+1103.769606051" lastFinishedPulling="2025-11-25 19:52:28.451833258 +0000 UTC m=+1130.368195624" observedRunningTime="2025-11-25 19:52:29.383731374 +0000 UTC m=+1131.300093750" watchObservedRunningTime="2025-11-25 19:52:29.39136202 +0000 UTC m=+1131.307724386" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.515334 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-857879b544-hkmfq"] Nov 25 19:52:29 crc kubenswrapper[4775]: E1125 19:52:29.515678 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerName="dnsmasq-dns" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.515692 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerName="dnsmasq-dns" Nov 25 19:52:29 crc kubenswrapper[4775]: E1125 19:52:29.515704 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bc599e-4476-4c80-9749-97b489f52a22" containerName="placement-db-sync" Nov 25 19:52:29 crc 
kubenswrapper[4775]: I1125 19:52:29.515712 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bc599e-4476-4c80-9749-97b489f52a22" containerName="placement-db-sync" Nov 25 19:52:29 crc kubenswrapper[4775]: E1125 19:52:29.515726 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" containerName="keystone-bootstrap" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.515731 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" containerName="keystone-bootstrap" Nov 25 19:52:29 crc kubenswrapper[4775]: E1125 19:52:29.515739 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerName="init" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.515745 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerName="init" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.515937 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="61bc599e-4476-4c80-9749-97b489f52a22" containerName="placement-db-sync" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.515961 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9d8128-4199-4c1a-8e3d-ee81f10ef97a" containerName="dnsmasq-dns" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.515972 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" containerName="keystone-bootstrap" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.520507 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.525474 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.526901 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.527206 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xlssr" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.528941 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.529081 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.540906 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.549201 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-857879b544-hkmfq"] Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.571223 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f9dd67654-p257f"] Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.572950 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.575800 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-psflb" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.575949 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.576018 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.576101 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.577753 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.585930 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f9dd67654-p257f"] Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.642638 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-fernet-keys\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.642922 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-combined-ca-bundle\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.643131 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-internal-tls-certs\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.643437 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-config-data\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.643549 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-public-tls-certs\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.643710 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2zpv\" (UniqueName: \"kubernetes.io/projected/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-kube-api-access-m2zpv\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.644063 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-credential-keys\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.650889 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-scripts\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.720481 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752368 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-fernet-keys\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752413 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-public-tls-certs\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752436 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m89rk\" (UniqueName: \"kubernetes.io/projected/db7604e6-c828-4d0f-9b60-9ae852dce0b7-kube-api-access-m89rk\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752494 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-combined-ca-bundle\") pod \"keystone-857879b544-hkmfq\" (UID: 
\"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752513 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-combined-ca-bundle\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752543 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-internal-tls-certs\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752569 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-config-data\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752588 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-internal-tls-certs\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752606 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7604e6-c828-4d0f-9b60-9ae852dce0b7-logs\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") 
" pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752620 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-public-tls-certs\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752639 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2zpv\" (UniqueName: \"kubernetes.io/projected/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-kube-api-access-m2zpv\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752695 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-credential-keys\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752711 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-scripts\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.752740 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-config-data\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc 
kubenswrapper[4775]: I1125 19:52:29.752758 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-scripts\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.762115 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-combined-ca-bundle\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.762481 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-internal-tls-certs\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.765455 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-credential-keys\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.765744 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-fernet-keys\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.767592 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-scripts\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.768682 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-config-data\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.771143 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-public-tls-certs\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.775643 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2zpv\" (UniqueName: \"kubernetes.io/projected/285f1de5-4e45-4b02-9ed9-b70b68f6b68d-kube-api-access-m2zpv\") pod \"keystone-857879b544-hkmfq\" (UID: \"285f1de5-4e45-4b02-9ed9-b70b68f6b68d\") " pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.853553 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l75t7\" (UniqueName: \"kubernetes.io/projected/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-kube-api-access-l75t7\") pod \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.853879 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-config\") pod 
\"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.853990 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-combined-ca-bundle\") pod \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\" (UID: \"e7477a30-1ee6-4d7e-83d2-7650c311ef6a\") " Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.854256 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-combined-ca-bundle\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.854375 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-internal-tls-certs\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.854458 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7604e6-c828-4d0f-9b60-9ae852dce0b7-logs\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.854555 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-config-data\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc 
kubenswrapper[4775]: I1125 19:52:29.854631 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-scripts\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.854749 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-public-tls-certs\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.854817 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m89rk\" (UniqueName: \"kubernetes.io/projected/db7604e6-c828-4d0f-9b60-9ae852dce0b7-kube-api-access-m89rk\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.855182 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7604e6-c828-4d0f-9b60-9ae852dce0b7-logs\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.857716 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-kube-api-access-l75t7" (OuterVolumeSpecName: "kube-api-access-l75t7") pod "e7477a30-1ee6-4d7e-83d2-7650c311ef6a" (UID: "e7477a30-1ee6-4d7e-83d2-7650c311ef6a"). InnerVolumeSpecName "kube-api-access-l75t7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.857990 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-internal-tls-certs\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.859048 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-combined-ca-bundle\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.859972 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-public-tls-certs\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.861256 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-scripts\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.861888 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7604e6-c828-4d0f-9b60-9ae852dce0b7-config-data\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.875594 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m89rk\" (UniqueName: \"kubernetes.io/projected/db7604e6-c828-4d0f-9b60-9ae852dce0b7-kube-api-access-m89rk\") pod \"placement-f9dd67654-p257f\" (UID: \"db7604e6-c828-4d0f-9b60-9ae852dce0b7\") " pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.878742 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.888200 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7477a30-1ee6-4d7e-83d2-7650c311ef6a" (UID: "e7477a30-1ee6-4d7e-83d2-7650c311ef6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.893712 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.898885 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-config" (OuterVolumeSpecName: "config") pod "e7477a30-1ee6-4d7e-83d2-7650c311ef6a" (UID: "e7477a30-1ee6-4d7e-83d2-7650c311ef6a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.957239 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l75t7\" (UniqueName: \"kubernetes.io/projected/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-kube-api-access-l75t7\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.957304 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:29 crc kubenswrapper[4775]: I1125 19:52:29.957318 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7477a30-1ee6-4d7e-83d2-7650c311ef6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.390189 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f9dd67654-p257f"] Nov 25 19:52:30 crc kubenswrapper[4775]: W1125 19:52:30.391810 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb7604e6_c828_4d0f_9b60_9ae852dce0b7.slice/crio-9ae80f00f6d02506a25f604a5741f9565235b65f3771c58a269da33945d69242 WatchSource:0}: Error finding container 9ae80f00f6d02506a25f604a5741f9565235b65f3771c58a269da33945d69242: Status 404 returned error can't find the container with id 9ae80f00f6d02506a25f604a5741f9565235b65f3771c58a269da33945d69242 Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.393125 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6vb6j" event={"ID":"e7477a30-1ee6-4d7e-83d2-7650c311ef6a","Type":"ContainerDied","Data":"9084ada2ded7c56457a19f075fa073d5e53f17c5af36b47df16df9b8b6534af3"} Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.393158 4775 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="9084ada2ded7c56457a19f075fa073d5e53f17c5af36b47df16df9b8b6534af3" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.393210 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6vb6j" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.470754 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-857879b544-hkmfq"] Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.613544 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-r6w9v"] Nov 25 19:52:30 crc kubenswrapper[4775]: E1125 19:52:30.614018 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7477a30-1ee6-4d7e-83d2-7650c311ef6a" containerName="neutron-db-sync" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.614042 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7477a30-1ee6-4d7e-83d2-7650c311ef6a" containerName="neutron-db-sync" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.614221 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7477a30-1ee6-4d7e-83d2-7650c311ef6a" containerName="neutron-db-sync" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.617393 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.650450 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-r6w9v"] Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.778471 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-dns-svc\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.778536 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.778571 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.779430 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-config\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.779560 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5wx4s\" (UniqueName: \"kubernetes.io/projected/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-kube-api-access-5wx4s\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.863707 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f8dccbcd8-gkj6h"] Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.865201 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.871673 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.871828 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.871932 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.872061 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6trrw" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.882284 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wx4s\" (UniqueName: \"kubernetes.io/projected/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-kube-api-access-5wx4s\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.882347 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-dns-svc\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " 
pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.882379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.882397 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.882424 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-config\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.885192 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.885637 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: 
I1125 19:52:30.886080 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-dns-svc\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.886283 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-config\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.886476 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f8dccbcd8-gkj6h"] Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.917463 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wx4s\" (UniqueName: \"kubernetes.io/projected/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-kube-api-access-5wx4s\") pod \"dnsmasq-dns-7b946d459c-r6w9v\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.983731 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-combined-ca-bundle\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.983808 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-httpd-config\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: 
\"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.983848 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-ovndb-tls-certs\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.983935 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxrm\" (UniqueName: \"kubernetes.io/projected/0b683337-9591-44fc-8815-898878abf387-kube-api-access-xfxrm\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:30 crc kubenswrapper[4775]: I1125 19:52:30.983971 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-config\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.085679 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-httpd-config\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.085762 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-ovndb-tls-certs\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " 
pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.085851 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfxrm\" (UniqueName: \"kubernetes.io/projected/0b683337-9591-44fc-8815-898878abf387-kube-api-access-xfxrm\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.085882 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-config\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.085960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-combined-ca-bundle\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.089588 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-config\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.089885 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-httpd-config\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.091462 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-combined-ca-bundle\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.093230 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-ovndb-tls-certs\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.095224 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.102073 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfxrm\" (UniqueName: \"kubernetes.io/projected/0b683337-9591-44fc-8815-898878abf387-kube-api-access-xfxrm\") pod \"neutron-6f8dccbcd8-gkj6h\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.204773 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.407990 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-857879b544-hkmfq" event={"ID":"285f1de5-4e45-4b02-9ed9-b70b68f6b68d","Type":"ContainerStarted","Data":"5aed1e94972fd4ae80683fbee0f858f68599305f8880c0ab8332162653cf67e0"} Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.408365 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.408382 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-857879b544-hkmfq" event={"ID":"285f1de5-4e45-4b02-9ed9-b70b68f6b68d","Type":"ContainerStarted","Data":"d071b553bf17c2e068b1def8e09b1de15cf697028a015e22bc3c914ec53e8864"} Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.409715 4775 generic.go:334] "Generic (PLEG): container finished" podID="7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" containerID="94f5bf9c2755ae1bbc2e249ee8e6e89eedc18597c932e6b27f4d48382f10c491" exitCode=0 Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.409782 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xnrzx" event={"ID":"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f","Type":"ContainerDied","Data":"94f5bf9c2755ae1bbc2e249ee8e6e89eedc18597c932e6b27f4d48382f10c491"} Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.413045 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f9dd67654-p257f" event={"ID":"db7604e6-c828-4d0f-9b60-9ae852dce0b7","Type":"ContainerStarted","Data":"05d3b2ed834f6ffb02341d22e216e732bcdb27279846b6ab6e22036ada6d7fe7"} Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.413088 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f9dd67654-p257f" 
event={"ID":"db7604e6-c828-4d0f-9b60-9ae852dce0b7","Type":"ContainerStarted","Data":"98853a15818c177cc306c6baaa2d80e8b845776027f88cc2e241a33dfabb5f35"} Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.413101 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f9dd67654-p257f" event={"ID":"db7604e6-c828-4d0f-9b60-9ae852dce0b7","Type":"ContainerStarted","Data":"9ae80f00f6d02506a25f604a5741f9565235b65f3771c58a269da33945d69242"} Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.413183 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.413332 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f9dd67654-p257f" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.460391 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-857879b544-hkmfq" podStartSLOduration=2.460377031 podStartE2EDuration="2.460377031s" podCreationTimestamp="2025-11-25 19:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:31.427084847 +0000 UTC m=+1133.343447233" watchObservedRunningTime="2025-11-25 19:52:31.460377031 +0000 UTC m=+1133.376739397" Nov 25 19:52:31 crc kubenswrapper[4775]: I1125 19:52:31.468891 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f9dd67654-p257f" podStartSLOduration=2.46887822 podStartE2EDuration="2.46887822s" podCreationTimestamp="2025-11-25 19:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:31.458277486 +0000 UTC m=+1133.374639852" watchObservedRunningTime="2025-11-25 19:52:31.46887822 +0000 UTC m=+1133.385240586" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 
19:52:31.658236 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-r6w9v"] Nov 25 19:52:33 crc kubenswrapper[4775]: W1125 19:52:31.686279 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1daa3845_9ffe_44a9_b131_7229a4fd2b3e.slice/crio-e70e732bd18a1b23b90d02578392cc4aad90dc30a2454f1b8b24de812df22dbe WatchSource:0}: Error finding container e70e732bd18a1b23b90d02578392cc4aad90dc30a2454f1b8b24de812df22dbe: Status 404 returned error can't find the container with id e70e732bd18a1b23b90d02578392cc4aad90dc30a2454f1b8b24de812df22dbe Nov 25 19:52:33 crc kubenswrapper[4775]: W1125 19:52:31.910533 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b683337_9591_44fc_8815_898878abf387.slice/crio-d6699d7ec90a951375a3f39415b0f9a397400c8afbec0f2220122cf9e7838c3c WatchSource:0}: Error finding container d6699d7ec90a951375a3f39415b0f9a397400c8afbec0f2220122cf9e7838c3c: Status 404 returned error can't find the container with id d6699d7ec90a951375a3f39415b0f9a397400c8afbec0f2220122cf9e7838c3c Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:31.912196 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f8dccbcd8-gkj6h"] Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:32.421078 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f8dccbcd8-gkj6h" event={"ID":"0b683337-9591-44fc-8815-898878abf387","Type":"ContainerStarted","Data":"d6699d7ec90a951375a3f39415b0f9a397400c8afbec0f2220122cf9e7838c3c"} Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:32.423114 4775 generic.go:334] "Generic (PLEG): container finished" podID="1daa3845-9ffe-44a9-b131-7229a4fd2b3e" containerID="af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a" exitCode=0 Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:32.423154 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" event={"ID":"1daa3845-9ffe-44a9-b131-7229a4fd2b3e","Type":"ContainerDied","Data":"af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a"} Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:32.423190 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" event={"ID":"1daa3845-9ffe-44a9-b131-7229a4fd2b3e","Type":"ContainerStarted","Data":"e70e732bd18a1b23b90d02578392cc4aad90dc30a2454f1b8b24de812df22dbe"} Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.179830 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fd788f7d7-czrxl"] Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.185544 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.187253 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.187519 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.190962 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fd788f7d7-czrxl"] Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.230112 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-public-tls-certs\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.230232 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-combined-ca-bundle\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.230271 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-ovndb-tls-certs\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.230301 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrm9\" (UniqueName: \"kubernetes.io/projected/9017fd2b-6435-43eb-8c16-85894d4713e9-kube-api-access-cqrm9\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.230330 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-config\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.230368 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-httpd-config\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.230407 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-internal-tls-certs\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.332898 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-httpd-config\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.333008 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-internal-tls-certs\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.333083 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-public-tls-certs\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.333195 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-combined-ca-bundle\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.333245 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-ovndb-tls-certs\") pod 
\"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.333288 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrm9\" (UniqueName: \"kubernetes.io/projected/9017fd2b-6435-43eb-8c16-85894d4713e9-kube-api-access-cqrm9\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.333333 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-config\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.339188 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-httpd-config\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.339785 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-ovndb-tls-certs\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.341474 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-internal-tls-certs\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 
19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.342025 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-config\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.342374 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-combined-ca-bundle\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.343498 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9017fd2b-6435-43eb-8c16-85894d4713e9-public-tls-certs\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.355320 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrm9\" (UniqueName: \"kubernetes.io/projected/9017fd2b-6435-43eb-8c16-85894d4713e9-kube-api-access-cqrm9\") pod \"neutron-5fd788f7d7-czrxl\" (UID: \"9017fd2b-6435-43eb-8c16-85894d4713e9\") " pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.432091 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f8dccbcd8-gkj6h" event={"ID":"0b683337-9591-44fc-8815-898878abf387","Type":"ContainerStarted","Data":"34fa1d3e31f5a23de2c8564da649d0861c5ab1a3bb99333d690c0a3747987909"} Nov 25 19:52:33 crc kubenswrapper[4775]: I1125 19:52:33.511402 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:34 crc kubenswrapper[4775]: I1125 19:52:34.259846 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fd788f7d7-czrxl"] Nov 25 19:52:34 crc kubenswrapper[4775]: I1125 19:52:34.445219 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f8dccbcd8-gkj6h" event={"ID":"0b683337-9591-44fc-8815-898878abf387","Type":"ContainerStarted","Data":"a424cfcdcb7ed66a5df7ce811658598803456291bc0c6d5b6832794e3d1bd219"} Nov 25 19:52:34 crc kubenswrapper[4775]: I1125 19:52:34.445337 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:52:34 crc kubenswrapper[4775]: I1125 19:52:34.454837 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" event={"ID":"1daa3845-9ffe-44a9-b131-7229a4fd2b3e","Type":"ContainerStarted","Data":"9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d"} Nov 25 19:52:34 crc kubenswrapper[4775]: I1125 19:52:34.456149 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:34 crc kubenswrapper[4775]: I1125 19:52:34.475839 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f8dccbcd8-gkj6h" podStartSLOduration=4.47581966 podStartE2EDuration="4.47581966s" podCreationTimestamp="2025-11-25 19:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:34.464217419 +0000 UTC m=+1136.380579795" watchObservedRunningTime="2025-11-25 19:52:34.47581966 +0000 UTC m=+1136.392182046" Nov 25 19:52:34 crc kubenswrapper[4775]: I1125 19:52:34.491521 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" podStartSLOduration=4.491502161 
podStartE2EDuration="4.491502161s" podCreationTimestamp="2025-11-25 19:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:34.482102838 +0000 UTC m=+1136.398465214" watchObservedRunningTime="2025-11-25 19:52:34.491502161 +0000 UTC m=+1136.407864547" Nov 25 19:52:36 crc kubenswrapper[4775]: E1125 19:52:36.001550 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice/crio-918eee7106bf47190cb1a18df51d1680862153c7438ee4341e9fc072a2865390\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536c4362_3c5f_4f46_97d7_bae733d91ee7.slice\": RecentStats: unable to find data in memory cache]" Nov 25 19:52:36 crc kubenswrapper[4775]: I1125 19:52:36.787034 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:36 crc kubenswrapper[4775]: I1125 19:52:36.921756 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-db-sync-config-data\") pod \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " Nov 25 19:52:36 crc kubenswrapper[4775]: I1125 19:52:36.921928 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn8zj\" (UniqueName: \"kubernetes.io/projected/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-kube-api-access-kn8zj\") pod \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " Nov 25 19:52:36 crc kubenswrapper[4775]: I1125 19:52:36.922025 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-combined-ca-bundle\") pod \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\" (UID: \"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f\") " Nov 25 19:52:36 crc kubenswrapper[4775]: I1125 19:52:36.937108 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" (UID: "7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:36 crc kubenswrapper[4775]: I1125 19:52:36.937253 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-kube-api-access-kn8zj" (OuterVolumeSpecName: "kube-api-access-kn8zj") pod "7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" (UID: "7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f"). 
InnerVolumeSpecName "kube-api-access-kn8zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:36 crc kubenswrapper[4775]: I1125 19:52:36.962176 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" (UID: "7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:37 crc kubenswrapper[4775]: I1125 19:52:37.024143 4775 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:37 crc kubenswrapper[4775]: I1125 19:52:37.024182 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn8zj\" (UniqueName: \"kubernetes.io/projected/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-kube-api-access-kn8zj\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:37 crc kubenswrapper[4775]: I1125 19:52:37.024215 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:37 crc kubenswrapper[4775]: I1125 19:52:37.486839 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xnrzx" event={"ID":"7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f","Type":"ContainerDied","Data":"6c7d08ac3419311ea380ebee566d7840164e0266d4f0ca3f35d167c79f2aeaf1"} Nov 25 19:52:37 crc kubenswrapper[4775]: I1125 19:52:37.486893 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c7d08ac3419311ea380ebee566d7840164e0266d4f0ca3f35d167c79f2aeaf1" Nov 25 19:52:37 crc kubenswrapper[4775]: I1125 19:52:37.486949 4775 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xnrzx" Nov 25 19:52:37 crc kubenswrapper[4775]: I1125 19:52:37.487953 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fd788f7d7-czrxl" event={"ID":"9017fd2b-6435-43eb-8c16-85894d4713e9","Type":"ContainerStarted","Data":"649f25ff755e2075a278a79ec9d52060744d57fe413c4003318a41af1f4c44b1"} Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.085502 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5bcd58f99f-7glc6"] Nov 25 19:52:38 crc kubenswrapper[4775]: E1125 19:52:38.086224 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" containerName="barbican-db-sync" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.086237 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" containerName="barbican-db-sync" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.086392 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" containerName="barbican-db-sync" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.087214 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.089708 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.101218 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8kq4w" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.102206 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.104283 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bcd58f99f-7glc6"] Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.132523 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-54587fd766-l4dln"] Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.133818 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.141131 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.142019 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-config-data\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.142097 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv7n9\" (UniqueName: \"kubernetes.io/projected/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-kube-api-access-wv7n9\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.142146 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-combined-ca-bundle\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.142166 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-config-data-custom\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 
19:52:38.142188 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-logs\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.174405 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54587fd766-l4dln"] Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.204116 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-r6w9v"] Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.204475 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" podUID="1daa3845-9ffe-44a9-b131-7229a4fd2b3e" containerName="dnsmasq-dns" containerID="cri-o://9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d" gracePeriod=10 Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.206760 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245151 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4832609-d922-4c24-9b69-a9fbd2de6c86-config-data-custom\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245227 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv7n9\" (UniqueName: \"kubernetes.io/projected/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-kube-api-access-wv7n9\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: 
\"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245255 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4832609-d922-4c24-9b69-a9fbd2de6c86-combined-ca-bundle\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245358 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2lnr\" (UniqueName: \"kubernetes.io/projected/f4832609-d922-4c24-9b69-a9fbd2de6c86-kube-api-access-n2lnr\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245387 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-combined-ca-bundle\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245409 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4832609-d922-4c24-9b69-a9fbd2de6c86-config-data\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245437 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-config-data-custom\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245485 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-logs\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245515 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4832609-d922-4c24-9b69-a9fbd2de6c86-logs\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.245585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-config-data\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.250440 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-rbqq6"] Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.251965 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.252860 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-logs\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.259079 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-config-data\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.262083 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-config-data-custom\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.267696 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-combined-ca-bundle\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.274076 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv7n9\" (UniqueName: \"kubernetes.io/projected/520e459e-76e8-4e4b-8e81-3eacb6bfe1c8-kube-api-access-wv7n9\") pod \"barbican-worker-5bcd58f99f-7glc6\" (UID: \"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8\") " 
pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.282316 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-rbqq6"] Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.319026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9749f886-zqznl"] Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.320462 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.325724 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.331168 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9749f886-zqznl"] Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.351778 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2lnr\" (UniqueName: \"kubernetes.io/projected/f4832609-d922-4c24-9b69-a9fbd2de6c86-kube-api-access-n2lnr\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.351820 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4832609-d922-4c24-9b69-a9fbd2de6c86-config-data\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.351869 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4832609-d922-4c24-9b69-a9fbd2de6c86-logs\") pod 
\"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.351908 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-config\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.351990 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwpk\" (UniqueName: \"kubernetes.io/projected/0ab881e6-b35e-44b4-adc6-5c176618f3c2-kube-api-access-sjwpk\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.352021 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4832609-d922-4c24-9b69-a9fbd2de6c86-config-data-custom\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.352048 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.352073 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.352100 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4832609-d922-4c24-9b69-a9fbd2de6c86-combined-ca-bundle\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.352127 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-dns-svc\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.353996 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4832609-d922-4c24-9b69-a9fbd2de6c86-logs\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.355546 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4832609-d922-4c24-9b69-a9fbd2de6c86-config-data-custom\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.356806 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f4832609-d922-4c24-9b69-a9fbd2de6c86-config-data\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.359074 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4832609-d922-4c24-9b69-a9fbd2de6c86-combined-ca-bundle\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.368568 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2lnr\" (UniqueName: \"kubernetes.io/projected/f4832609-d922-4c24-9b69-a9fbd2de6c86-kube-api-access-n2lnr\") pod \"barbican-keystone-listener-54587fd766-l4dln\" (UID: \"f4832609-d922-4c24-9b69-a9fbd2de6c86\") " pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.414850 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bcd58f99f-7glc6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.450048 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54587fd766-l4dln" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455578 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8fwm\" (UniqueName: \"kubernetes.io/projected/4a06ed2e-ff90-4b5c-92b1-3e102255820d-kube-api-access-c8fwm\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455632 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-config\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455676 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data-custom\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455728 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455761 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a06ed2e-ff90-4b5c-92b1-3e102255820d-logs\") pod \"barbican-api-9749f886-zqznl\" (UID: 
\"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455792 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwpk\" (UniqueName: \"kubernetes.io/projected/0ab881e6-b35e-44b4-adc6-5c176618f3c2-kube-api-access-sjwpk\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455819 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455874 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455909 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-dns-svc\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.455943 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-combined-ca-bundle\") pod \"barbican-api-9749f886-zqznl\" (UID: 
\"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.456762 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-config\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.456885 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.456897 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-dns-svc\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.456937 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.472362 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwpk\" (UniqueName: \"kubernetes.io/projected/0ab881e6-b35e-44b4-adc6-5c176618f3c2-kube-api-access-sjwpk\") pod \"dnsmasq-dns-6bb684768f-rbqq6\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc 
kubenswrapper[4775]: I1125 19:52:38.557850 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data-custom\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.557918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.557946 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a06ed2e-ff90-4b5c-92b1-3e102255820d-logs\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.558037 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-combined-ca-bundle\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.558080 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8fwm\" (UniqueName: \"kubernetes.io/projected/4a06ed2e-ff90-4b5c-92b1-3e102255820d-kube-api-access-c8fwm\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.560790 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a06ed2e-ff90-4b5c-92b1-3e102255820d-logs\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.564562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.566119 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-combined-ca-bundle\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.566502 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data-custom\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.602369 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8fwm\" (UniqueName: \"kubernetes.io/projected/4a06ed2e-ff90-4b5c-92b1-3e102255820d-kube-api-access-c8fwm\") pod \"barbican-api-9749f886-zqznl\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.732455 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:38 crc kubenswrapper[4775]: I1125 19:52:38.738912 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.167277 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:39 crc kubenswrapper[4775]: E1125 19:52:39.266178 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.276607 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-config\") pod \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.276935 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-dns-svc\") pod \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.276975 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-nb\") pod \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.277128 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-sb\") pod \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.277171 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wx4s\" (UniqueName: \"kubernetes.io/projected/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-kube-api-access-5wx4s\") pod \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\" (UID: \"1daa3845-9ffe-44a9-b131-7229a4fd2b3e\") " Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.282073 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-kube-api-access-5wx4s" (OuterVolumeSpecName: "kube-api-access-5wx4s") pod "1daa3845-9ffe-44a9-b131-7229a4fd2b3e" (UID: "1daa3845-9ffe-44a9-b131-7229a4fd2b3e"). InnerVolumeSpecName "kube-api-access-5wx4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.333035 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-config" (OuterVolumeSpecName: "config") pod "1daa3845-9ffe-44a9-b131-7229a4fd2b3e" (UID: "1daa3845-9ffe-44a9-b131-7229a4fd2b3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.335822 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1daa3845-9ffe-44a9-b131-7229a4fd2b3e" (UID: "1daa3845-9ffe-44a9-b131-7229a4fd2b3e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.343071 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1daa3845-9ffe-44a9-b131-7229a4fd2b3e" (UID: "1daa3845-9ffe-44a9-b131-7229a4fd2b3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.372944 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1daa3845-9ffe-44a9-b131-7229a4fd2b3e" (UID: "1daa3845-9ffe-44a9-b131-7229a4fd2b3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.379124 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.379155 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.379164 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.379173 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:39 
crc kubenswrapper[4775]: I1125 19:52:39.379181 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wx4s\" (UniqueName: \"kubernetes.io/projected/1daa3845-9ffe-44a9-b131-7229a4fd2b3e-kube-api-access-5wx4s\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.493945 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9749f886-zqznl"] Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.504779 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bcd58f99f-7glc6"] Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.507818 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85f8651-1533-4c47-94a7-8d9e5114771d","Type":"ContainerStarted","Data":"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2"} Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.507975 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="ceilometer-notification-agent" containerID="cri-o://c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052" gracePeriod=30 Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.508057 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="proxy-httpd" containerID="cri-o://c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2" gracePeriod=30 Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.508065 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.508093 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="sg-core" 
containerID="cri-o://39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6" gracePeriod=30 Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.510578 4775 generic.go:334] "Generic (PLEG): container finished" podID="1daa3845-9ffe-44a9-b131-7229a4fd2b3e" containerID="9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d" exitCode=0 Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.510684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" event={"ID":"1daa3845-9ffe-44a9-b131-7229a4fd2b3e","Type":"ContainerDied","Data":"9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d"} Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.510698 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.510725 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-r6w9v" event={"ID":"1daa3845-9ffe-44a9-b131-7229a4fd2b3e","Type":"ContainerDied","Data":"e70e732bd18a1b23b90d02578392cc4aad90dc30a2454f1b8b24de812df22dbe"} Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.510766 4775 scope.go:117] "RemoveContainer" containerID="9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.514160 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fd788f7d7-czrxl" event={"ID":"9017fd2b-6435-43eb-8c16-85894d4713e9","Type":"ContainerStarted","Data":"6c9c3ea25fb4030f7101056b375a4f6be624357a09c207b7ea8d3b37b19153fc"} Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.514193 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fd788f7d7-czrxl" event={"ID":"9017fd2b-6435-43eb-8c16-85894d4713e9","Type":"ContainerStarted","Data":"f659a63650b2fca94feca1c9a53396a344459b33e2eadbbf8fb24a7a8a3f223a"} Nov 25 19:52:39 crc 
kubenswrapper[4775]: I1125 19:52:39.514988 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:52:39 crc kubenswrapper[4775]: W1125 19:52:39.538544 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a06ed2e_ff90_4b5c_92b1_3e102255820d.slice/crio-e8237beda97430b9eed191b77bfba4939615f4b3d18b78b6093dd15ed090986a WatchSource:0}: Error finding container e8237beda97430b9eed191b77bfba4939615f4b3d18b78b6093dd15ed090986a: Status 404 returned error can't find the container with id e8237beda97430b9eed191b77bfba4939615f4b3d18b78b6093dd15ed090986a Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.554953 4775 scope.go:117] "RemoveContainer" containerID="af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.573857 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fd788f7d7-czrxl" podStartSLOduration=6.573836624 podStartE2EDuration="6.573836624s" podCreationTimestamp="2025-11-25 19:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:39.550869888 +0000 UTC m=+1141.467232254" watchObservedRunningTime="2025-11-25 19:52:39.573836624 +0000 UTC m=+1141.490198990" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.582230 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-r6w9v"] Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.588144 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-r6w9v"] Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.590399 4775 scope.go:117] "RemoveContainer" containerID="9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d" Nov 25 19:52:39 crc kubenswrapper[4775]: E1125 
19:52:39.590782 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d\": container with ID starting with 9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d not found: ID does not exist" containerID="9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.590808 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d"} err="failed to get container status \"9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d\": rpc error: code = NotFound desc = could not find container \"9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d\": container with ID starting with 9e61964b5d399aee4ae4714bc1a6b5a83d8929106e93553b4c547e21af57664d not found: ID does not exist" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.590827 4775 scope.go:117] "RemoveContainer" containerID="af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a" Nov 25 19:52:39 crc kubenswrapper[4775]: E1125 19:52:39.591140 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a\": container with ID starting with af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a not found: ID does not exist" containerID="af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.591162 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a"} err="failed to get container status \"af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a\": rpc 
error: code = NotFound desc = could not find container \"af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a\": container with ID starting with af843ba12bd2001f0e7bf686ea6072e21c45a50a4b11d7211525f23e18af7a2a not found: ID does not exist" Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.628691 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-rbqq6"] Nov 25 19:52:39 crc kubenswrapper[4775]: I1125 19:52:39.720376 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54587fd766-l4dln"] Nov 25 19:52:39 crc kubenswrapper[4775]: W1125 19:52:39.732906 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4832609_d922_4c24_9b69_a9fbd2de6c86.slice/crio-6ceab75f79a99e9341900ad7dbdf7c04d471102420bce5a4fb0df75c27c680d6 WatchSource:0}: Error finding container 6ceab75f79a99e9341900ad7dbdf7c04d471102420bce5a4fb0df75c27c680d6: Status 404 returned error can't find the container with id 6ceab75f79a99e9341900ad7dbdf7c04d471102420bce5a4fb0df75c27c680d6 Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.507426 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.533460 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9749f886-zqznl" event={"ID":"4a06ed2e-ff90-4b5c-92b1-3e102255820d","Type":"ContainerStarted","Data":"98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.533508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9749f886-zqznl" event={"ID":"4a06ed2e-ff90-4b5c-92b1-3e102255820d","Type":"ContainerStarted","Data":"a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.533709 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.533935 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.533955 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9749f886-zqznl" event={"ID":"4a06ed2e-ff90-4b5c-92b1-3e102255820d","Type":"ContainerStarted","Data":"e8237beda97430b9eed191b77bfba4939615f4b3d18b78b6093dd15ed090986a"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.541968 4775 generic.go:334] "Generic (PLEG): container finished" podID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerID="c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2" exitCode=0 Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.542185 4775 generic.go:334] "Generic (PLEG): container finished" podID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerID="39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6" exitCode=2 Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.542193 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerID="c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052" exitCode=0 Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.542284 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85f8651-1533-4c47-94a7-8d9e5114771d","Type":"ContainerDied","Data":"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.542345 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85f8651-1533-4c47-94a7-8d9e5114771d","Type":"ContainerDied","Data":"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.542358 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85f8651-1533-4c47-94a7-8d9e5114771d","Type":"ContainerDied","Data":"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.542368 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85f8651-1533-4c47-94a7-8d9e5114771d","Type":"ContainerDied","Data":"06d4c853d8d9527ac8cc78ae2198c761a3f7267da54d0a9d7ab6e3e25f49b148"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.542431 4775 scope.go:117] "RemoveContainer" containerID="c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.542573 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.553679 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9749f886-zqznl" podStartSLOduration=2.553665247 podStartE2EDuration="2.553665247s" podCreationTimestamp="2025-11-25 19:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:40.552769943 +0000 UTC m=+1142.469132309" watchObservedRunningTime="2025-11-25 19:52:40.553665247 +0000 UTC m=+1142.470027613" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.554806 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54587fd766-l4dln" event={"ID":"f4832609-d922-4c24-9b69-a9fbd2de6c86","Type":"ContainerStarted","Data":"6ceab75f79a99e9341900ad7dbdf7c04d471102420bce5a4fb0df75c27c680d6"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.587773 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bcd58f99f-7glc6" event={"ID":"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8","Type":"ContainerStarted","Data":"32fc8c594284f00f49b53c5ee67ec1bf6cc6ef87ea2d04f52af2eed5086b2383"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.590873 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p6t4d" event={"ID":"bc8913d5-5107-422d-8554-1d8c951253fd","Type":"ContainerStarted","Data":"1110b5a72bf12a9b230e59dcd5b13a3af717d43db6dea5a764d2ef66c4ae7762"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.597854 4775 generic.go:334] "Generic (PLEG): container finished" podID="0ab881e6-b35e-44b4-adc6-5c176618f3c2" containerID="a826fd524e5088945582e968fede869a031d4e9555379f9faa0cd3d52c4fbf1c" exitCode=0 Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.599010 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-combined-ca-bundle\") pod \"f85f8651-1533-4c47-94a7-8d9e5114771d\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.599066 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-run-httpd\") pod \"f85f8651-1533-4c47-94a7-8d9e5114771d\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.599093 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj2bp\" (UniqueName: \"kubernetes.io/projected/f85f8651-1533-4c47-94a7-8d9e5114771d-kube-api-access-pj2bp\") pod \"f85f8651-1533-4c47-94a7-8d9e5114771d\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.599122 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-sg-core-conf-yaml\") pod \"f85f8651-1533-4c47-94a7-8d9e5114771d\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.599236 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-config-data\") pod \"f85f8651-1533-4c47-94a7-8d9e5114771d\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.599272 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-scripts\") pod \"f85f8651-1533-4c47-94a7-8d9e5114771d\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " Nov 25 19:52:40 crc 
kubenswrapper[4775]: I1125 19:52:40.599309 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-log-httpd\") pod \"f85f8651-1533-4c47-94a7-8d9e5114771d\" (UID: \"f85f8651-1533-4c47-94a7-8d9e5114771d\") " Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.601209 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" event={"ID":"0ab881e6-b35e-44b4-adc6-5c176618f3c2","Type":"ContainerDied","Data":"a826fd524e5088945582e968fede869a031d4e9555379f9faa0cd3d52c4fbf1c"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.601244 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" event={"ID":"0ab881e6-b35e-44b4-adc6-5c176618f3c2","Type":"ContainerStarted","Data":"1dc13f21bb3f69338c4020b587fc008473c142546c28a795bbda1b44748b9b00"} Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.609523 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f85f8651-1533-4c47-94a7-8d9e5114771d" (UID: "f85f8651-1533-4c47-94a7-8d9e5114771d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.610917 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f85f8651-1533-4c47-94a7-8d9e5114771d" (UID: "f85f8651-1533-4c47-94a7-8d9e5114771d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.611242 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-p6t4d" podStartSLOduration=2.801280036 podStartE2EDuration="40.611220313s" podCreationTimestamp="2025-11-25 19:52:00 +0000 UTC" firstStartedPulling="2025-11-25 19:52:01.169578747 +0000 UTC m=+1103.085941123" lastFinishedPulling="2025-11-25 19:52:38.979519034 +0000 UTC m=+1140.895881400" observedRunningTime="2025-11-25 19:52:40.610630656 +0000 UTC m=+1142.526993012" watchObservedRunningTime="2025-11-25 19:52:40.611220313 +0000 UTC m=+1142.527582679" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.622479 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85f8651-1533-4c47-94a7-8d9e5114771d-kube-api-access-pj2bp" (OuterVolumeSpecName: "kube-api-access-pj2bp") pod "f85f8651-1533-4c47-94a7-8d9e5114771d" (UID: "f85f8651-1533-4c47-94a7-8d9e5114771d"). InnerVolumeSpecName "kube-api-access-pj2bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.624820 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-scripts" (OuterVolumeSpecName: "scripts") pod "f85f8651-1533-4c47-94a7-8d9e5114771d" (UID: "f85f8651-1533-4c47-94a7-8d9e5114771d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.632855 4775 scope.go:117] "RemoveContainer" containerID="39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.671701 4775 scope.go:117] "RemoveContainer" containerID="c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.675322 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85f8651-1533-4c47-94a7-8d9e5114771d" (UID: "f85f8651-1533-4c47-94a7-8d9e5114771d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.676326 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f85f8651-1533-4c47-94a7-8d9e5114771d" (UID: "f85f8651-1533-4c47-94a7-8d9e5114771d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.701766 4775 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.701801 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.701812 4775 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.701822 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.701834 4775 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85f8651-1533-4c47-94a7-8d9e5114771d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.701871 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj2bp\" (UniqueName: \"kubernetes.io/projected/f85f8651-1533-4c47-94a7-8d9e5114771d-kube-api-access-pj2bp\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.723459 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-config-data" (OuterVolumeSpecName: "config-data") pod "f85f8651-1533-4c47-94a7-8d9e5114771d" (UID: "f85f8651-1533-4c47-94a7-8d9e5114771d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.803496 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85f8651-1533-4c47-94a7-8d9e5114771d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.853487 4775 scope.go:117] "RemoveContainer" containerID="c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2" Nov 25 19:52:40 crc kubenswrapper[4775]: E1125 19:52:40.863360 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2\": container with ID starting with c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2 not found: ID does not exist" containerID="c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.863415 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2"} err="failed to get container status \"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2\": rpc error: code = NotFound desc = could not find container \"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2\": container with ID starting with c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2 not found: ID does not exist" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.863440 4775 scope.go:117] "RemoveContainer" containerID="39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6" Nov 25 19:52:40 crc kubenswrapper[4775]: E1125 19:52:40.864058 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6\": container with ID starting with 39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6 not found: ID does not exist" containerID="39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.864076 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6"} err="failed to get container status \"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6\": rpc error: code = NotFound desc = could not find container \"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6\": container with ID starting with 39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6 not found: ID does not exist" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.864090 4775 scope.go:117] "RemoveContainer" containerID="c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052" Nov 25 19:52:40 crc kubenswrapper[4775]: E1125 19:52:40.866372 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052\": container with ID starting with c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052 not found: ID does not exist" containerID="c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.866395 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052"} err="failed to get container status \"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052\": rpc error: code = NotFound desc = could not find container \"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052\": container with ID 
starting with c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052 not found: ID does not exist" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.866410 4775 scope.go:117] "RemoveContainer" containerID="c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.866699 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2"} err="failed to get container status \"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2\": rpc error: code = NotFound desc = could not find container \"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2\": container with ID starting with c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2 not found: ID does not exist" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.866715 4775 scope.go:117] "RemoveContainer" containerID="39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.867590 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6"} err="failed to get container status \"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6\": rpc error: code = NotFound desc = could not find container \"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6\": container with ID starting with 39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6 not found: ID does not exist" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.867612 4775 scope.go:117] "RemoveContainer" containerID="c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.867931 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052"} err="failed to get container status \"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052\": rpc error: code = NotFound desc = could not find container \"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052\": container with ID starting with c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052 not found: ID does not exist" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.867949 4775 scope.go:117] "RemoveContainer" containerID="c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.869039 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2"} err="failed to get container status \"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2\": rpc error: code = NotFound desc = could not find container \"c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2\": container with ID starting with c2503f8b52e43dada8f9a4d0e8253eda92bc83a0e9274e3996332492a2e335f2 not found: ID does not exist" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.869061 4775 scope.go:117] "RemoveContainer" containerID="39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.869405 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6"} err="failed to get container status \"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6\": rpc error: code = NotFound desc = could not find container \"39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6\": container with ID starting with 39df150af95808e41ed58c909bfd1c56c24288a9b4eddf274d0db0962c5e8ff6 not found: ID does not 
exist" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.869418 4775 scope.go:117] "RemoveContainer" containerID="c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.869842 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052"} err="failed to get container status \"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052\": rpc error: code = NotFound desc = could not find container \"c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052\": container with ID starting with c3fa3a66bdf5efc02df718d2dc5c956bb6767deca8b1ae534c724418f079c052 not found: ID does not exist" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.933773 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1daa3845-9ffe-44a9-b131-7229a4fd2b3e" path="/var/lib/kubelet/pods/1daa3845-9ffe-44a9-b131-7229a4fd2b3e/volumes" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.934458 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7444d7c94-bsxlv"] Nov 25 19:52:40 crc kubenswrapper[4775]: E1125 19:52:40.934715 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="sg-core" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.934725 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="sg-core" Nov 25 19:52:40 crc kubenswrapper[4775]: E1125 19:52:40.934739 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="ceilometer-notification-agent" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.934745 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="ceilometer-notification-agent" Nov 
25 19:52:40 crc kubenswrapper[4775]: E1125 19:52:40.934771 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="proxy-httpd" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.934777 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="proxy-httpd" Nov 25 19:52:40 crc kubenswrapper[4775]: E1125 19:52:40.934791 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1daa3845-9ffe-44a9-b131-7229a4fd2b3e" containerName="init" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.934797 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1daa3845-9ffe-44a9-b131-7229a4fd2b3e" containerName="init" Nov 25 19:52:40 crc kubenswrapper[4775]: E1125 19:52:40.934804 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1daa3845-9ffe-44a9-b131-7229a4fd2b3e" containerName="dnsmasq-dns" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.934810 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1daa3845-9ffe-44a9-b131-7229a4fd2b3e" containerName="dnsmasq-dns" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.934985 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1daa3845-9ffe-44a9-b131-7229a4fd2b3e" containerName="dnsmasq-dns" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.935002 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="sg-core" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.935021 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="ceilometer-notification-agent" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.935032 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" containerName="proxy-httpd" Nov 25 19:52:40 crc kubenswrapper[4775]: 
I1125 19:52:40.936221 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7444d7c94-bsxlv"] Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.936250 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.936320 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.939937 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.940089 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 19:52:40 crc kubenswrapper[4775]: I1125 19:52:40.967943 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.001235 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.004328 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.006050 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.009481 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-internal-tls-certs\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.009536 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sp8d\" (UniqueName: \"kubernetes.io/projected/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-kube-api-access-7sp8d\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.009620 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-public-tls-certs\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.009640 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-combined-ca-bundle\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.009690 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-logs\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.009713 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-config-data\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.009735 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-config-data-custom\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.011118 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.011135 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.071083 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.071422 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.111300 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nflgl\" (UniqueName: \"kubernetes.io/projected/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-kube-api-access-nflgl\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.111368 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-internal-tls-certs\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.111388 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.111444 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.111561 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sp8d\" (UniqueName: 
\"kubernetes.io/projected/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-kube-api-access-7sp8d\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.111604 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-run-httpd\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.111700 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-log-httpd\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.112043 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-public-tls-certs\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.112109 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-combined-ca-bundle\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.112132 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-scripts\") pod 
\"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.113462 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-logs\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.113507 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-config-data\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.113557 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-config-data\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.113616 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-config-data-custom\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.113957 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-logs\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc 
kubenswrapper[4775]: I1125 19:52:41.116317 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-internal-tls-certs\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.117737 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-public-tls-certs\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.117982 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-config-data-custom\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.118963 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-config-data\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.119162 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-combined-ca-bundle\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.127570 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7sp8d\" (UniqueName: \"kubernetes.io/projected/8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade-kube-api-access-7sp8d\") pod \"barbican-api-7444d7c94-bsxlv\" (UID: \"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade\") " pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.215297 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-config-data\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.215386 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nflgl\" (UniqueName: \"kubernetes.io/projected/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-kube-api-access-nflgl\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.215421 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.215442 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.215473 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-run-httpd\") pod 
\"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.215565 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-log-httpd\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.215613 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-scripts\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.216246 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-run-httpd\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.216361 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-log-httpd\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.220490 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-config-data\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.222312 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.223081 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.224499 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-scripts\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.233683 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nflgl\" (UniqueName: \"kubernetes.io/projected/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-kube-api-access-nflgl\") pod \"ceilometer-0\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.262331 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.330527 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.616485 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" event={"ID":"0ab881e6-b35e-44b4-adc6-5c176618f3c2","Type":"ContainerStarted","Data":"425f21563175e041a3d11e53d7cbc64ee6f8aeeb4f9e10064783c1aa729bc827"} Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.616553 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:41 crc kubenswrapper[4775]: I1125 19:52:41.641324 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" podStartSLOduration=3.641306025 podStartE2EDuration="3.641306025s" podCreationTimestamp="2025-11-25 19:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:41.631326116 +0000 UTC m=+1143.547688502" watchObservedRunningTime="2025-11-25 19:52:41.641306025 +0000 UTC m=+1143.557668401" Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.191098 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:52:42 crc kubenswrapper[4775]: W1125 19:52:42.198663 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf7c3214_aa6b_4fd4_94fa_46e06cb2fd4a.slice/crio-5998b4b869f29df97f53a80f8e975c6e17885f4021589704c6d5e1fc1a6b3828 WatchSource:0}: Error finding container 5998b4b869f29df97f53a80f8e975c6e17885f4021589704c6d5e1fc1a6b3828: Status 404 returned error can't find the container with id 5998b4b869f29df97f53a80f8e975c6e17885f4021589704c6d5e1fc1a6b3828 Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.262355 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7444d7c94-bsxlv"] Nov 25 19:52:42 crc 
kubenswrapper[4775]: W1125 19:52:42.272161 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba477cb_3f3e_41e2_9ca3_fe3c94fbdade.slice/crio-7117545492d8b4b4ecaaa7118bfe49929a99c708ccfb27addbd76222ec2abaa3 WatchSource:0}: Error finding container 7117545492d8b4b4ecaaa7118bfe49929a99c708ccfb27addbd76222ec2abaa3: Status 404 returned error can't find the container with id 7117545492d8b4b4ecaaa7118bfe49929a99c708ccfb27addbd76222ec2abaa3 Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.634753 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerStarted","Data":"5998b4b869f29df97f53a80f8e975c6e17885f4021589704c6d5e1fc1a6b3828"} Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.636538 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7444d7c94-bsxlv" event={"ID":"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade","Type":"ContainerStarted","Data":"4497fc9cada476cda3b7889fa4635a64d0bae5cc7aaead966280785b887864e1"} Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.636561 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7444d7c94-bsxlv" event={"ID":"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade","Type":"ContainerStarted","Data":"7117545492d8b4b4ecaaa7118bfe49929a99c708ccfb27addbd76222ec2abaa3"} Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.638160 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54587fd766-l4dln" event={"ID":"f4832609-d922-4c24-9b69-a9fbd2de6c86","Type":"ContainerStarted","Data":"20d319c8dacd8ffb8f493db07abdea0c379cba2752c00568d8abd8b8576ee97d"} Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.638189 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54587fd766-l4dln" 
event={"ID":"f4832609-d922-4c24-9b69-a9fbd2de6c86","Type":"ContainerStarted","Data":"e911381dee7733c2707dd8258f56ed82a05ab377c74f6ce1b2a82c0c55ee766b"} Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.642525 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bcd58f99f-7glc6" event={"ID":"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8","Type":"ContainerStarted","Data":"e5fd3ac6c4be118386dc0fd242cfe3481d284d7124b79666567110ff300a2ffd"} Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.642588 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bcd58f99f-7glc6" event={"ID":"520e459e-76e8-4e4b-8e81-3eacb6bfe1c8","Type":"ContainerStarted","Data":"9eaadc2efe4160b28bc4fcd4ce72ddcfbc2f5be498fac531af6a75329fcf9016"} Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.665744 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-54587fd766-l4dln" podStartSLOduration=2.630333825 podStartE2EDuration="4.665720595s" podCreationTimestamp="2025-11-25 19:52:38 +0000 UTC" firstStartedPulling="2025-11-25 19:52:39.736494982 +0000 UTC m=+1141.652857348" lastFinishedPulling="2025-11-25 19:52:41.771881752 +0000 UTC m=+1143.688244118" observedRunningTime="2025-11-25 19:52:42.65476899 +0000 UTC m=+1144.571131366" watchObservedRunningTime="2025-11-25 19:52:42.665720595 +0000 UTC m=+1144.582082971" Nov 25 19:52:42 crc kubenswrapper[4775]: I1125 19:52:42.865795 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85f8651-1533-4c47-94a7-8d9e5114771d" path="/var/lib/kubelet/pods/f85f8651-1533-4c47-94a7-8d9e5114771d/volumes" Nov 25 19:52:43 crc kubenswrapper[4775]: I1125 19:52:43.661924 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerStarted","Data":"f09082f9852506fa895d7821b1c13f8074f430a1d4ea1d577368699559e9f0e7"} Nov 25 19:52:43 crc 
kubenswrapper[4775]: I1125 19:52:43.669999 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7444d7c94-bsxlv" event={"ID":"8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade","Type":"ContainerStarted","Data":"a10735870f419baefb86645d0244fe1c7255995cbac7bfd807e6335a0395fcf0"} Nov 25 19:52:43 crc kubenswrapper[4775]: I1125 19:52:43.671223 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:43 crc kubenswrapper[4775]: I1125 19:52:43.671271 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:43 crc kubenswrapper[4775]: I1125 19:52:43.700288 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5bcd58f99f-7glc6" podStartSLOduration=3.482329536 podStartE2EDuration="5.700264267s" podCreationTimestamp="2025-11-25 19:52:38 +0000 UTC" firstStartedPulling="2025-11-25 19:52:39.554622458 +0000 UTC m=+1141.470984824" lastFinishedPulling="2025-11-25 19:52:41.772557189 +0000 UTC m=+1143.688919555" observedRunningTime="2025-11-25 19:52:42.695716371 +0000 UTC m=+1144.612078737" watchObservedRunningTime="2025-11-25 19:52:43.700264267 +0000 UTC m=+1145.616626643" Nov 25 19:52:43 crc kubenswrapper[4775]: I1125 19:52:43.702068 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7444d7c94-bsxlv" podStartSLOduration=3.702055965 podStartE2EDuration="3.702055965s" podCreationTimestamp="2025-11-25 19:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:43.697133733 +0000 UTC m=+1145.613496099" watchObservedRunningTime="2025-11-25 19:52:43.702055965 +0000 UTC m=+1145.618418351" Nov 25 19:52:44 crc kubenswrapper[4775]: I1125 19:52:44.676702 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerStarted","Data":"cff4db24649b95c244f1504685a3980b70c042bd7cb20920137427b889487cad"} Nov 25 19:52:44 crc kubenswrapper[4775]: I1125 19:52:44.676964 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerStarted","Data":"de06255136ed863cfb2a96032f0ea0582ab0ef4f2b530de9a39469cb597f9fe4"} Nov 25 19:52:44 crc kubenswrapper[4775]: I1125 19:52:44.678684 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc8913d5-5107-422d-8554-1d8c951253fd" containerID="1110b5a72bf12a9b230e59dcd5b13a3af717d43db6dea5a764d2ef66c4ae7762" exitCode=0 Nov 25 19:52:44 crc kubenswrapper[4775]: I1125 19:52:44.678726 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p6t4d" event={"ID":"bc8913d5-5107-422d-8554-1d8c951253fd","Type":"ContainerDied","Data":"1110b5a72bf12a9b230e59dcd5b13a3af717d43db6dea5a764d2ef66c4ae7762"} Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.070196 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.123521 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-db-sync-config-data\") pod \"bc8913d5-5107-422d-8554-1d8c951253fd\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.123573 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-combined-ca-bundle\") pod \"bc8913d5-5107-422d-8554-1d8c951253fd\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.123596 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpk6x\" (UniqueName: \"kubernetes.io/projected/bc8913d5-5107-422d-8554-1d8c951253fd-kube-api-access-gpk6x\") pod \"bc8913d5-5107-422d-8554-1d8c951253fd\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.123676 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-scripts\") pod \"bc8913d5-5107-422d-8554-1d8c951253fd\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.123722 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-config-data\") pod \"bc8913d5-5107-422d-8554-1d8c951253fd\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.123746 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/bc8913d5-5107-422d-8554-1d8c951253fd-etc-machine-id\") pod \"bc8913d5-5107-422d-8554-1d8c951253fd\" (UID: \"bc8913d5-5107-422d-8554-1d8c951253fd\") " Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.124101 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc8913d5-5107-422d-8554-1d8c951253fd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bc8913d5-5107-422d-8554-1d8c951253fd" (UID: "bc8913d5-5107-422d-8554-1d8c951253fd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.129709 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-scripts" (OuterVolumeSpecName: "scripts") pod "bc8913d5-5107-422d-8554-1d8c951253fd" (UID: "bc8913d5-5107-422d-8554-1d8c951253fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.129944 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8913d5-5107-422d-8554-1d8c951253fd-kube-api-access-gpk6x" (OuterVolumeSpecName: "kube-api-access-gpk6x") pod "bc8913d5-5107-422d-8554-1d8c951253fd" (UID: "bc8913d5-5107-422d-8554-1d8c951253fd"). InnerVolumeSpecName "kube-api-access-gpk6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.130603 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bc8913d5-5107-422d-8554-1d8c951253fd" (UID: "bc8913d5-5107-422d-8554-1d8c951253fd"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.152553 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc8913d5-5107-422d-8554-1d8c951253fd" (UID: "bc8913d5-5107-422d-8554-1d8c951253fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.176705 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-config-data" (OuterVolumeSpecName: "config-data") pod "bc8913d5-5107-422d-8554-1d8c951253fd" (UID: "bc8913d5-5107-422d-8554-1d8c951253fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.225903 4775 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.225936 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.225945 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpk6x\" (UniqueName: \"kubernetes.io/projected/bc8913d5-5107-422d-8554-1d8c951253fd-kube-api-access-gpk6x\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.225958 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-scripts\") on node \"crc\" 
DevicePath \"\"" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.225965 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8913d5-5107-422d-8554-1d8c951253fd-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.225974 4775 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc8913d5-5107-422d-8554-1d8c951253fd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.720985 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerStarted","Data":"8e7d892a79e29fb2849aab43bfa7240de67629f42a8dc06bf021e25328fd5127"} Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.721565 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.734876 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p6t4d" event={"ID":"bc8913d5-5107-422d-8554-1d8c951253fd","Type":"ContainerDied","Data":"63e9bce2424c0317fcc51a84d6a835d7168072ab4ffc3bef9eda8b8567cc989c"} Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.734950 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63e9bce2424c0317fcc51a84d6a835d7168072ab4ffc3bef9eda8b8567cc989c" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.735091 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-p6t4d" Nov 25 19:52:46 crc kubenswrapper[4775]: I1125 19:52:46.774522 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.560461823 podStartE2EDuration="6.774487294s" podCreationTimestamp="2025-11-25 19:52:40 +0000 UTC" firstStartedPulling="2025-11-25 19:52:42.2015466 +0000 UTC m=+1144.117908976" lastFinishedPulling="2025-11-25 19:52:45.415572081 +0000 UTC m=+1147.331934447" observedRunningTime="2025-11-25 19:52:46.766342585 +0000 UTC m=+1148.682704961" watchObservedRunningTime="2025-11-25 19:52:46.774487294 +0000 UTC m=+1148.690849740" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.068974 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 19:52:47 crc kubenswrapper[4775]: E1125 19:52:47.072144 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8913d5-5107-422d-8554-1d8c951253fd" containerName="cinder-db-sync" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.072845 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8913d5-5107-422d-8554-1d8c951253fd" containerName="cinder-db-sync" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.073060 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8913d5-5107-422d-8554-1d8c951253fd" containerName="cinder-db-sync" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.073984 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.077378 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pp5wj" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.077423 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.077510 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.082456 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.095873 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.132062 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-rbqq6"] Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.132441 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" podUID="0ab881e6-b35e-44b4-adc6-5c176618f3c2" containerName="dnsmasq-dns" containerID="cri-o://425f21563175e041a3d11e53d7cbc64ee6f8aeeb4f9e10064783c1aa729bc827" gracePeriod=10 Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.133829 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.150562 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46jk\" (UniqueName: \"kubernetes.io/projected/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-kube-api-access-f46jk\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 
19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.150617 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.150656 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.150704 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.150724 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.150756 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.171736 4775 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-bn57q"] Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.183205 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.197300 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-bn57q"] Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252496 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252575 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252613 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252636 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252667 
4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-config\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252722 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mptz\" (UniqueName: \"kubernetes.io/projected/aa5215c2-c105-433b-b02f-be661535774c-kube-api-access-9mptz\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252754 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46jk\" (UniqueName: \"kubernetes.io/projected/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-kube-api-access-f46jk\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252781 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252805 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252822 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.252847 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.256419 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.265786 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.266282 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.266863 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.269503 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.286181 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46jk\" (UniqueName: \"kubernetes.io/projected/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-kube-api-access-f46jk\") pod \"cinder-scheduler-0\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.324549 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.340103 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.354423 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355390 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc956353-4430-4219-b077-5fe86ba366aa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355418 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data-custom\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355444 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc956353-4430-4219-b077-5fe86ba366aa-logs\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355493 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mptz\" (UniqueName: \"kubernetes.io/projected/aa5215c2-c105-433b-b02f-be661535774c-kube-api-access-9mptz\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355524 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355558 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-scripts\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355596 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh8tz\" (UniqueName: \"kubernetes.io/projected/bc956353-4430-4219-b077-5fe86ba366aa-kube-api-access-lh8tz\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355623 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355665 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355695 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data\") pod \"cinder-api-0\" (UID: 
\"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355735 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.355749 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-config\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.356579 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-config\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.357331 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.358970 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 
19:52:47.373119 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.376085 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.408843 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.433903 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mptz\" (UniqueName: \"kubernetes.io/projected/aa5215c2-c105-433b-b02f-be661535774c-kube-api-access-9mptz\") pod \"dnsmasq-dns-6d97fcdd8f-bn57q\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.459532 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.459825 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-scripts\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.459906 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh8tz\" (UniqueName: 
\"kubernetes.io/projected/bc956353-4430-4219-b077-5fe86ba366aa-kube-api-access-lh8tz\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.460066 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.460199 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data-custom\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.460264 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc956353-4430-4219-b077-5fe86ba366aa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.460426 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc956353-4430-4219-b077-5fe86ba366aa-logs\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.460857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc956353-4430-4219-b077-5fe86ba366aa-logs\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.467306 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc956353-4430-4219-b077-5fe86ba366aa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.468015 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.474690 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.482160 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-scripts\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.497921 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data-custom\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.500367 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh8tz\" (UniqueName: \"kubernetes.io/projected/bc956353-4430-4219-b077-5fe86ba366aa-kube-api-access-lh8tz\") pod \"cinder-api-0\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") " 
pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.507027 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.735753 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.765006 4775 generic.go:334] "Generic (PLEG): container finished" podID="0ab881e6-b35e-44b4-adc6-5c176618f3c2" containerID="425f21563175e041a3d11e53d7cbc64ee6f8aeeb4f9e10064783c1aa729bc827" exitCode=0 Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.765085 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" event={"ID":"0ab881e6-b35e-44b4-adc6-5c176618f3c2","Type":"ContainerDied","Data":"425f21563175e041a3d11e53d7cbc64ee6f8aeeb4f9e10064783c1aa729bc827"} Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.768283 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.865908 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjwpk\" (UniqueName: \"kubernetes.io/projected/0ab881e6-b35e-44b4-adc6-5c176618f3c2-kube-api-access-sjwpk\") pod \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.866037 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-nb\") pod \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.866089 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-config\") pod \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.866119 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-dns-svc\") pod \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.866171 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-sb\") pod \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\" (UID: \"0ab881e6-b35e-44b4-adc6-5c176618f3c2\") " Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.874807 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0ab881e6-b35e-44b4-adc6-5c176618f3c2-kube-api-access-sjwpk" (OuterVolumeSpecName: "kube-api-access-sjwpk") pod "0ab881e6-b35e-44b4-adc6-5c176618f3c2" (UID: "0ab881e6-b35e-44b4-adc6-5c176618f3c2"). InnerVolumeSpecName "kube-api-access-sjwpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.947299 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ab881e6-b35e-44b4-adc6-5c176618f3c2" (UID: "0ab881e6-b35e-44b4-adc6-5c176618f3c2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.950489 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ab881e6-b35e-44b4-adc6-5c176618f3c2" (UID: "0ab881e6-b35e-44b4-adc6-5c176618f3c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.953540 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ab881e6-b35e-44b4-adc6-5c176618f3c2" (UID: "0ab881e6-b35e-44b4-adc6-5c176618f3c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.966205 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-config" (OuterVolumeSpecName: "config") pod "0ab881e6-b35e-44b4-adc6-5c176618f3c2" (UID: "0ab881e6-b35e-44b4-adc6-5c176618f3c2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.967572 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjwpk\" (UniqueName: \"kubernetes.io/projected/0ab881e6-b35e-44b4-adc6-5c176618f3c2-kube-api-access-sjwpk\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.967592 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.967603 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.967614 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:47 crc kubenswrapper[4775]: I1125 19:52:47.967622 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab881e6-b35e-44b4-adc6-5c176618f3c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.096199 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-bn57q"] Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.214311 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 19:52:48 crc kubenswrapper[4775]: W1125 19:52:48.226851 4775 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5039a8a_4a7f_468b_8cc5_c8eb1a02f2d4.slice/crio-065fc1b5797daf455a3cdfa43364cc86bfb5828f1d6406c743c4420544e27210 WatchSource:0}: Error finding container 065fc1b5797daf455a3cdfa43364cc86bfb5828f1d6406c743c4420544e27210: Status 404 returned error can't find the container with id 065fc1b5797daf455a3cdfa43364cc86bfb5828f1d6406c743c4420544e27210 Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.301052 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.811131 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4","Type":"ContainerStarted","Data":"065fc1b5797daf455a3cdfa43364cc86bfb5828f1d6406c743c4420544e27210"} Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.830219 4775 generic.go:334] "Generic (PLEG): container finished" podID="aa5215c2-c105-433b-b02f-be661535774c" containerID="c5c0e9b1976351be2cd7bda5d158ba864d20867fe036b05ce105f84fe89b2237" exitCode=0 Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.830287 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" event={"ID":"aa5215c2-c105-433b-b02f-be661535774c","Type":"ContainerDied","Data":"c5c0e9b1976351be2cd7bda5d158ba864d20867fe036b05ce105f84fe89b2237"} Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.830317 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" event={"ID":"aa5215c2-c105-433b-b02f-be661535774c","Type":"ContainerStarted","Data":"7b2f7db44cde0cb44b0826983e28dbcd045c081b1dfb012f52b8296cec28cab9"} Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.844409 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"bc956353-4430-4219-b077-5fe86ba366aa","Type":"ContainerStarted","Data":"af51d07fc67f5297b82e3b4e17ac16e7a0d73931a6cdca3fdccfa045664d7ed5"} Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.887622 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" event={"ID":"0ab881e6-b35e-44b4-adc6-5c176618f3c2","Type":"ContainerDied","Data":"1dc13f21bb3f69338c4020b587fc008473c142546c28a795bbda1b44748b9b00"} Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.887703 4775 scope.go:117] "RemoveContainer" containerID="425f21563175e041a3d11e53d7cbc64ee6f8aeeb4f9e10064783c1aa729bc827" Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.887852 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-rbqq6" Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.977136 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-rbqq6"] Nov 25 19:52:48 crc kubenswrapper[4775]: I1125 19:52:48.984972 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-rbqq6"] Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.045798 4775 scope.go:117] "RemoveContainer" containerID="a826fd524e5088945582e968fede869a031d4e9555379f9faa0cd3d52c4fbf1c" Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.251282 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.894390 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bc956353-4430-4219-b077-5fe86ba366aa","Type":"ContainerStarted","Data":"aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33"} Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.894659 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"bc956353-4430-4219-b077-5fe86ba366aa","Type":"ContainerStarted","Data":"fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48"} Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.894821 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bc956353-4430-4219-b077-5fe86ba366aa" containerName="cinder-api-log" containerID="cri-o://fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48" gracePeriod=30 Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.894938 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.895062 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bc956353-4430-4219-b077-5fe86ba366aa" containerName="cinder-api" containerID="cri-o://aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33" gracePeriod=30 Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.904680 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4","Type":"ContainerStarted","Data":"9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390"} Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.907187 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" event={"ID":"aa5215c2-c105-433b-b02f-be661535774c","Type":"ContainerStarted","Data":"da719608c5cae4286af129bf9ee801e3164136bbc2dfc2dda1ee42be51a19dcd"} Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.907580 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.931248 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.931229226 
podStartE2EDuration="2.931229226s" podCreationTimestamp="2025-11-25 19:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:49.917285101 +0000 UTC m=+1151.833647487" watchObservedRunningTime="2025-11-25 19:52:49.931229226 +0000 UTC m=+1151.847591592" Nov 25 19:52:49 crc kubenswrapper[4775]: I1125 19:52:49.941007 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" podStartSLOduration=2.940990918 podStartE2EDuration="2.940990918s" podCreationTimestamp="2025-11-25 19:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:52:49.934524465 +0000 UTC m=+1151.850886831" watchObservedRunningTime="2025-11-25 19:52:49.940990918 +0000 UTC m=+1151.857353284" Nov 25 19:52:50 crc kubenswrapper[4775]: I1125 19:52:50.316423 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:50 crc kubenswrapper[4775]: I1125 19:52:50.467334 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:50 crc kubenswrapper[4775]: I1125 19:52:50.858568 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab881e6-b35e-44b4-adc6-5c176618f3c2" path="/var/lib/kubelet/pods/0ab881e6-b35e-44b4-adc6-5c176618f3c2/volumes" Nov 25 19:52:50 crc kubenswrapper[4775]: I1125 19:52:50.921029 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc956353-4430-4219-b077-5fe86ba366aa" containerID="fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48" exitCode=143 Nov 25 19:52:50 crc kubenswrapper[4775]: I1125 19:52:50.921097 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"bc956353-4430-4219-b077-5fe86ba366aa","Type":"ContainerDied","Data":"fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48"} Nov 25 19:52:50 crc kubenswrapper[4775]: I1125 19:52:50.924346 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4","Type":"ContainerStarted","Data":"7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a"} Nov 25 19:52:50 crc kubenswrapper[4775]: I1125 19:52:50.953375 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.034014116 podStartE2EDuration="3.953348004s" podCreationTimestamp="2025-11-25 19:52:47 +0000 UTC" firstStartedPulling="2025-11-25 19:52:48.229701933 +0000 UTC m=+1150.146064299" lastFinishedPulling="2025-11-25 19:52:49.149035821 +0000 UTC m=+1151.065398187" observedRunningTime="2025-11-25 19:52:50.946769018 +0000 UTC m=+1152.863131384" watchObservedRunningTime="2025-11-25 19:52:50.953348004 +0000 UTC m=+1152.869710400" Nov 25 19:52:52 crc kubenswrapper[4775]: I1125 19:52:52.410075 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 19:52:52 crc kubenswrapper[4775]: I1125 19:52:52.589336 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:52 crc kubenswrapper[4775]: I1125 19:52:52.609446 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7444d7c94-bsxlv" Nov 25 19:52:52 crc kubenswrapper[4775]: I1125 19:52:52.710398 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9749f886-zqznl"] Nov 25 19:52:52 crc kubenswrapper[4775]: I1125 19:52:52.714780 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9749f886-zqznl" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" 
containerName="barbican-api" containerID="cri-o://98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546" gracePeriod=30 Nov 25 19:52:52 crc kubenswrapper[4775]: I1125 19:52:52.715039 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9749f886-zqznl" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerName="barbican-api-log" containerID="cri-o://a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea" gracePeriod=30 Nov 25 19:52:52 crc kubenswrapper[4775]: I1125 19:52:52.945077 4775 generic.go:334] "Generic (PLEG): container finished" podID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerID="a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea" exitCode=143 Nov 25 19:52:52 crc kubenswrapper[4775]: I1125 19:52:52.945149 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9749f886-zqznl" event={"ID":"4a06ed2e-ff90-4b5c-92b1-3e102255820d","Type":"ContainerDied","Data":"a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea"} Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.134907 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9749f886-zqznl" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": read tcp 10.217.0.2:36628->10.217.0.144:9311: read: connection reset by peer" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.134974 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9749f886-zqznl" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": read tcp 10.217.0.2:36624->10.217.0.144:9311: read: connection reset by peer" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.718290 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.858371 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8fwm\" (UniqueName: \"kubernetes.io/projected/4a06ed2e-ff90-4b5c-92b1-3e102255820d-kube-api-access-c8fwm\") pod \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.858503 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data-custom\") pod \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.858592 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-combined-ca-bundle\") pod \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.858626 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data\") pod \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.858704 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a06ed2e-ff90-4b5c-92b1-3e102255820d-logs\") pod \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\" (UID: \"4a06ed2e-ff90-4b5c-92b1-3e102255820d\") " Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.859399 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4a06ed2e-ff90-4b5c-92b1-3e102255820d-logs" (OuterVolumeSpecName: "logs") pod "4a06ed2e-ff90-4b5c-92b1-3e102255820d" (UID: "4a06ed2e-ff90-4b5c-92b1-3e102255820d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.864467 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a06ed2e-ff90-4b5c-92b1-3e102255820d" (UID: "4a06ed2e-ff90-4b5c-92b1-3e102255820d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.870813 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a06ed2e-ff90-4b5c-92b1-3e102255820d-kube-api-access-c8fwm" (OuterVolumeSpecName: "kube-api-access-c8fwm") pod "4a06ed2e-ff90-4b5c-92b1-3e102255820d" (UID: "4a06ed2e-ff90-4b5c-92b1-3e102255820d"). InnerVolumeSpecName "kube-api-access-c8fwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.884724 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a06ed2e-ff90-4b5c-92b1-3e102255820d" (UID: "4a06ed2e-ff90-4b5c-92b1-3e102255820d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.909021 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data" (OuterVolumeSpecName: "config-data") pod "4a06ed2e-ff90-4b5c-92b1-3e102255820d" (UID: "4a06ed2e-ff90-4b5c-92b1-3e102255820d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.961587 4775 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.961620 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.961634 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a06ed2e-ff90-4b5c-92b1-3e102255820d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.961663 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a06ed2e-ff90-4b5c-92b1-3e102255820d-logs\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.961677 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8fwm\" (UniqueName: \"kubernetes.io/projected/4a06ed2e-ff90-4b5c-92b1-3e102255820d-kube-api-access-c8fwm\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.992127 4775 generic.go:334] "Generic (PLEG): container finished" podID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerID="98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546" exitCode=0 Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.992204 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9749f886-zqznl" Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.992193 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9749f886-zqznl" event={"ID":"4a06ed2e-ff90-4b5c-92b1-3e102255820d","Type":"ContainerDied","Data":"98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546"} Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.992600 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9749f886-zqznl" event={"ID":"4a06ed2e-ff90-4b5c-92b1-3e102255820d","Type":"ContainerDied","Data":"e8237beda97430b9eed191b77bfba4939615f4b3d18b78b6093dd15ed090986a"} Nov 25 19:52:56 crc kubenswrapper[4775]: I1125 19:52:56.992636 4775 scope.go:117] "RemoveContainer" containerID="98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546" Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.024877 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9749f886-zqznl"] Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.026962 4775 scope.go:117] "RemoveContainer" containerID="a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea" Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.035791 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-9749f886-zqznl"] Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.057015 4775 scope.go:117] "RemoveContainer" containerID="98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546" Nov 25 19:52:57 crc kubenswrapper[4775]: E1125 19:52:57.057421 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546\": container with ID starting with 98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546 not found: ID does not exist" 
containerID="98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546" Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.057457 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546"} err="failed to get container status \"98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546\": rpc error: code = NotFound desc = could not find container \"98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546\": container with ID starting with 98930cebda0f7476d2285a843cfabe5cfbfa227b0e3242fee8befb0381e48546 not found: ID does not exist" Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.057479 4775 scope.go:117] "RemoveContainer" containerID="a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea" Nov 25 19:52:57 crc kubenswrapper[4775]: E1125 19:52:57.057872 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea\": container with ID starting with a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea not found: ID does not exist" containerID="a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea" Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.057905 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea"} err="failed to get container status \"a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea\": rpc error: code = NotFound desc = could not find container \"a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea\": container with ID starting with a8e3b6a3737e72607ddcf645fee25a258efb376d46d2b5056e6d6ea1de91beea not found: ID does not exist" Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.508986 4775 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.566904 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-rmzbw"] Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.567134 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" podUID="e2523e8b-c047-426a-908a-5b91c0e764bd" containerName="dnsmasq-dns" containerID="cri-o://04018220d9b93fc6801b13ba6b2f15456e209fa7995d4e1c6bf774ca394cffda" gracePeriod=10 Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.691239 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 19:52:57 crc kubenswrapper[4775]: I1125 19:52:57.766961 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.106905 4775 generic.go:334] "Generic (PLEG): container finished" podID="e2523e8b-c047-426a-908a-5b91c0e764bd" containerID="04018220d9b93fc6801b13ba6b2f15456e209fa7995d4e1c6bf774ca394cffda" exitCode=0 Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.107100 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" containerName="cinder-scheduler" containerID="cri-o://9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390" gracePeriod=30 Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.107190 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" event={"ID":"e2523e8b-c047-426a-908a-5b91c0e764bd","Type":"ContainerDied","Data":"04018220d9b93fc6801b13ba6b2f15456e209fa7995d4e1c6bf774ca394cffda"} Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.107475 4775 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/cinder-scheduler-0" podUID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" containerName="probe" containerID="cri-o://7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a" gracePeriod=30 Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.260752 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.389123 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-nb\") pod \"e2523e8b-c047-426a-908a-5b91c0e764bd\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.389206 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-dns-svc\") pod \"e2523e8b-c047-426a-908a-5b91c0e764bd\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.389345 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-sb\") pod \"e2523e8b-c047-426a-908a-5b91c0e764bd\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.389362 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt5gt\" (UniqueName: \"kubernetes.io/projected/e2523e8b-c047-426a-908a-5b91c0e764bd-kube-api-access-bt5gt\") pod \"e2523e8b-c047-426a-908a-5b91c0e764bd\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.389407 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-config\") pod \"e2523e8b-c047-426a-908a-5b91c0e764bd\" (UID: \"e2523e8b-c047-426a-908a-5b91c0e764bd\") " Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.401757 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2523e8b-c047-426a-908a-5b91c0e764bd-kube-api-access-bt5gt" (OuterVolumeSpecName: "kube-api-access-bt5gt") pod "e2523e8b-c047-426a-908a-5b91c0e764bd" (UID: "e2523e8b-c047-426a-908a-5b91c0e764bd"). InnerVolumeSpecName "kube-api-access-bt5gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.445591 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2523e8b-c047-426a-908a-5b91c0e764bd" (UID: "e2523e8b-c047-426a-908a-5b91c0e764bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.452776 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-config" (OuterVolumeSpecName: "config") pod "e2523e8b-c047-426a-908a-5b91c0e764bd" (UID: "e2523e8b-c047-426a-908a-5b91c0e764bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.454409 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2523e8b-c047-426a-908a-5b91c0e764bd" (UID: "e2523e8b-c047-426a-908a-5b91c0e764bd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.464972 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2523e8b-c047-426a-908a-5b91c0e764bd" (UID: "e2523e8b-c047-426a-908a-5b91c0e764bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.491140 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.491173 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt5gt\" (UniqueName: \"kubernetes.io/projected/e2523e8b-c047-426a-908a-5b91c0e764bd-kube-api-access-bt5gt\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.491185 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.491194 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.491202 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2523e8b-c047-426a-908a-5b91c0e764bd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:52:58 crc kubenswrapper[4775]: I1125 19:52:58.864275 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" 
path="/var/lib/kubelet/pods/4a06ed2e-ff90-4b5c-92b1-3e102255820d/volumes" Nov 25 19:52:59 crc kubenswrapper[4775]: I1125 19:52:59.129696 4775 generic.go:334] "Generic (PLEG): container finished" podID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" containerID="7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a" exitCode=0 Nov 25 19:52:59 crc kubenswrapper[4775]: I1125 19:52:59.129791 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4","Type":"ContainerDied","Data":"7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a"} Nov 25 19:52:59 crc kubenswrapper[4775]: I1125 19:52:59.134335 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" event={"ID":"e2523e8b-c047-426a-908a-5b91c0e764bd","Type":"ContainerDied","Data":"e1f4220789a2cf9b7d00e7697e684c93780e16dac94ed4875c011939ab2ca88f"} Nov 25 19:52:59 crc kubenswrapper[4775]: I1125 19:52:59.134389 4775 scope.go:117] "RemoveContainer" containerID="04018220d9b93fc6801b13ba6b2f15456e209fa7995d4e1c6bf774ca394cffda" Nov 25 19:52:59 crc kubenswrapper[4775]: I1125 19:52:59.134528 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-rmzbw" Nov 25 19:52:59 crc kubenswrapper[4775]: I1125 19:52:59.156431 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-rmzbw"] Nov 25 19:52:59 crc kubenswrapper[4775]: I1125 19:52:59.157553 4775 scope.go:117] "RemoveContainer" containerID="38732b094030c12d59bd61604c3736dd800ea69085cf43c0bfda32b72d86cd96" Nov 25 19:52:59 crc kubenswrapper[4775]: I1125 19:52:59.162322 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-rmzbw"] Nov 25 19:52:59 crc kubenswrapper[4775]: I1125 19:52:59.793607 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 19:53:00 crc kubenswrapper[4775]: I1125 19:53:00.857316 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2523e8b-c047-426a-908a-5b91c0e764bd" path="/var/lib/kubelet/pods/e2523e8b-c047-426a-908a-5b91c0e764bd/volumes" Nov 25 19:53:00 crc kubenswrapper[4775]: I1125 19:53:00.897184 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.030398 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data-custom\") pod \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.030494 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f46jk\" (UniqueName: \"kubernetes.io/projected/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-kube-api-access-f46jk\") pod \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.030542 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-scripts\") pod \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.030695 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-etc-machine-id\") pod \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.030749 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data\") pod \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.030780 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-combined-ca-bundle\") pod \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\" (UID: \"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4\") " Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.030813 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" (UID: "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.031232 4775 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.039148 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" (UID: "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.042201 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-kube-api-access-f46jk" (OuterVolumeSpecName: "kube-api-access-f46jk") pod "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" (UID: "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4"). InnerVolumeSpecName "kube-api-access-f46jk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.044519 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-scripts" (OuterVolumeSpecName: "scripts") pod "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" (UID: "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.110280 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" (UID: "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.126091 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f9dd67654-p257f" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.132379 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f46jk\" (UniqueName: \"kubernetes.io/projected/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-kube-api-access-f46jk\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.132414 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.132423 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.132445 4775 reconciler_common.go:293] 
"Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.136741 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data" (OuterVolumeSpecName: "config-data") pod "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" (UID: "e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.162353 4775 generic.go:334] "Generic (PLEG): container finished" podID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" containerID="9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390" exitCode=0 Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.162391 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4","Type":"ContainerDied","Data":"9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390"} Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.162416 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4","Type":"ContainerDied","Data":"065fc1b5797daf455a3cdfa43364cc86bfb5828f1d6406c743c4420544e27210"} Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.162432 4775 scope.go:117] "RemoveContainer" containerID="7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.162508 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.196598 4775 scope.go:117] "RemoveContainer" containerID="9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.198867 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.214526 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.219084 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.225232 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.225679 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab881e6-b35e-44b4-adc6-5c176618f3c2" containerName="init" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.225700 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab881e6-b35e-44b4-adc6-5c176618f3c2" containerName="init" Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.225716 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" containerName="cinder-scheduler" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.225725 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" containerName="cinder-scheduler" Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.225757 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerName="barbican-api" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.225765 4775 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerName="barbican-api" Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.225775 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2523e8b-c047-426a-908a-5b91c0e764bd" containerName="init" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.225782 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2523e8b-c047-426a-908a-5b91c0e764bd" containerName="init" Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.225801 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2523e8b-c047-426a-908a-5b91c0e764bd" containerName="dnsmasq-dns" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.225808 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2523e8b-c047-426a-908a-5b91c0e764bd" containerName="dnsmasq-dns" Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.225822 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab881e6-b35e-44b4-adc6-5c176618f3c2" containerName="dnsmasq-dns" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.225830 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab881e6-b35e-44b4-adc6-5c176618f3c2" containerName="dnsmasq-dns" Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.225845 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerName="barbican-api-log" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.225853 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerName="barbican-api-log" Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.225865 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" containerName="probe" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.225873 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" 
containerName="probe" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.226071 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" containerName="probe" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.226093 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab881e6-b35e-44b4-adc6-5c176618f3c2" containerName="dnsmasq-dns" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.226104 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" containerName="cinder-scheduler" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.226120 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2523e8b-c047-426a-908a-5b91c0e764bd" containerName="dnsmasq-dns" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.226136 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerName="barbican-api" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.226154 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a06ed2e-ff90-4b5c-92b1-3e102255820d" containerName="barbican-api-log" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.227274 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.232063 4775 scope.go:117] "RemoveContainer" containerID="7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.232615 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.234338 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.238793 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a\": container with ID starting with 7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a not found: ID does not exist" containerID="7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.238840 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a"} err="failed to get container status \"7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a\": rpc error: code = NotFound desc = could not find container \"7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a\": container with ID starting with 7d9236393d5d7a97a212e532ac549f5666ebc5728a68a6bbc2f8dba010ac271a not found: ID does not exist" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.238865 4775 scope.go:117] "RemoveContainer" containerID="9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390" Nov 25 19:53:01 crc kubenswrapper[4775]: E1125 19:53:01.240380 4775 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390\": container with ID starting with 9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390 not found: ID does not exist" containerID="9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.240418 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390"} err="failed to get container status \"9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390\": rpc error: code = NotFound desc = could not find container \"9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390\": container with ID starting with 9dbcbaf223b4a38af03b36bd75aba234f0e7d955d3d4a176fdf0d97e61628390 not found: ID does not exist" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.243031 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.337393 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-config-data\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.337468 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7045afb5-9467-4dd9-8d42-257ef82590d2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.337601 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-scripts\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.337761 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.337820 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.337924 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqshv\" (UniqueName: \"kubernetes.io/projected/7045afb5-9467-4dd9-8d42-257ef82590d2-kube-api-access-pqshv\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.439104 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.439159 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.439218 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqshv\" (UniqueName: \"kubernetes.io/projected/7045afb5-9467-4dd9-8d42-257ef82590d2-kube-api-access-pqshv\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.439278 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-config-data\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.439328 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7045afb5-9467-4dd9-8d42-257ef82590d2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.439360 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-scripts\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.439625 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7045afb5-9467-4dd9-8d42-257ef82590d2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.444711 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-config-data\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.444878 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.444892 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-scripts\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.445002 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7045afb5-9467-4dd9-8d42-257ef82590d2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.458165 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqshv\" (UniqueName: \"kubernetes.io/projected/7045afb5-9467-4dd9-8d42-257ef82590d2-kube-api-access-pqshv\") pod \"cinder-scheduler-0\" (UID: \"7045afb5-9467-4dd9-8d42-257ef82590d2\") " pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.562895 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/placement-f9dd67654-p257f" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.567100 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 19:53:01 crc kubenswrapper[4775]: I1125 19:53:01.738424 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-857879b544-hkmfq" Nov 25 19:53:02 crc kubenswrapper[4775]: I1125 19:53:02.091034 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 19:53:02 crc kubenswrapper[4775]: I1125 19:53:02.171757 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7045afb5-9467-4dd9-8d42-257ef82590d2","Type":"ContainerStarted","Data":"a80cd9051e6d86b25dc47612db7f826814422184bad957f790518a9601e83dc1"} Nov 25 19:53:02 crc kubenswrapper[4775]: I1125 19:53:02.860151 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4" path="/var/lib/kubelet/pods/e5039a8a-4a7f-468b-8cc5-c8eb1a02f2d4/volumes" Nov 25 19:53:03 crc kubenswrapper[4775]: I1125 19:53:03.182622 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7045afb5-9467-4dd9-8d42-257ef82590d2","Type":"ContainerStarted","Data":"5791a44ee1cb730151ee4f77aca21c2a335c32c12b6908da2a078e9e3bf675f7"} Nov 25 19:53:03 crc kubenswrapper[4775]: I1125 19:53:03.549164 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fd788f7d7-czrxl" Nov 25 19:53:03 crc kubenswrapper[4775]: I1125 19:53:03.630413 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f8dccbcd8-gkj6h"] Nov 25 19:53:03 crc kubenswrapper[4775]: I1125 19:53:03.630707 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f8dccbcd8-gkj6h" 
podUID="0b683337-9591-44fc-8815-898878abf387" containerName="neutron-api" containerID="cri-o://34fa1d3e31f5a23de2c8564da649d0861c5ab1a3bb99333d690c0a3747987909" gracePeriod=30 Nov 25 19:53:03 crc kubenswrapper[4775]: I1125 19:53:03.630793 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f8dccbcd8-gkj6h" podUID="0b683337-9591-44fc-8815-898878abf387" containerName="neutron-httpd" containerID="cri-o://a424cfcdcb7ed66a5df7ce811658598803456291bc0c6d5b6832794e3d1bd219" gracePeriod=30 Nov 25 19:53:04 crc kubenswrapper[4775]: I1125 19:53:04.192627 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7045afb5-9467-4dd9-8d42-257ef82590d2","Type":"ContainerStarted","Data":"55dc71a3e2ce80e494c0f997ec785c222d53161f40141a3d6dfdb7b6c623e659"} Nov 25 19:53:04 crc kubenswrapper[4775]: I1125 19:53:04.194725 4775 generic.go:334] "Generic (PLEG): container finished" podID="0b683337-9591-44fc-8815-898878abf387" containerID="a424cfcdcb7ed66a5df7ce811658598803456291bc0c6d5b6832794e3d1bd219" exitCode=0 Nov 25 19:53:04 crc kubenswrapper[4775]: I1125 19:53:04.194786 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f8dccbcd8-gkj6h" event={"ID":"0b683337-9591-44fc-8815-898878abf387","Type":"ContainerDied","Data":"a424cfcdcb7ed66a5df7ce811658598803456291bc0c6d5b6832794e3d1bd219"} Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.071405 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.071387018 podStartE2EDuration="4.071387018s" podCreationTimestamp="2025-11-25 19:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:53:04.213022497 +0000 UTC m=+1166.129384863" watchObservedRunningTime="2025-11-25 19:53:05.071387018 +0000 UTC m=+1166.987749384" Nov 25 19:53:05 crc 
kubenswrapper[4775]: I1125 19:53:05.072164 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.073440 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.075342 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-fwnbj" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.076483 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.079968 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.093284 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.204901 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a36126d8-22b4-46b4-aa24-c02eba72023e-openstack-config-secret\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.204965 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c52d\" (UniqueName: \"kubernetes.io/projected/a36126d8-22b4-46b4-aa24-c02eba72023e-kube-api-access-7c52d\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.205015 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a36126d8-22b4-46b4-aa24-c02eba72023e-openstack-config\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.205042 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36126d8-22b4-46b4-aa24-c02eba72023e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.306238 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a36126d8-22b4-46b4-aa24-c02eba72023e-openstack-config-secret\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.306295 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c52d\" (UniqueName: \"kubernetes.io/projected/a36126d8-22b4-46b4-aa24-c02eba72023e-kube-api-access-7c52d\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.306316 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a36126d8-22b4-46b4-aa24-c02eba72023e-openstack-config\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.306343 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36126d8-22b4-46b4-aa24-c02eba72023e-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.307775 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a36126d8-22b4-46b4-aa24-c02eba72023e-openstack-config\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.315829 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36126d8-22b4-46b4-aa24-c02eba72023e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.316506 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a36126d8-22b4-46b4-aa24-c02eba72023e-openstack-config-secret\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.324283 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c52d\" (UniqueName: \"kubernetes.io/projected/a36126d8-22b4-46b4-aa24-c02eba72023e-kube-api-access-7c52d\") pod \"openstackclient\" (UID: \"a36126d8-22b4-46b4-aa24-c02eba72023e\") " pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.407283 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 19:53:05 crc kubenswrapper[4775]: I1125 19:53:05.901251 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 19:53:05 crc kubenswrapper[4775]: W1125 19:53:05.906665 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda36126d8_22b4_46b4_aa24_c02eba72023e.slice/crio-3518a764898de9d39500b855dbf2a0748ef0e13aa8d83608dab6dc8e5d11b2d9 WatchSource:0}: Error finding container 3518a764898de9d39500b855dbf2a0748ef0e13aa8d83608dab6dc8e5d11b2d9: Status 404 returned error can't find the container with id 3518a764898de9d39500b855dbf2a0748ef0e13aa8d83608dab6dc8e5d11b2d9 Nov 25 19:53:06 crc kubenswrapper[4775]: I1125 19:53:06.212066 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a36126d8-22b4-46b4-aa24-c02eba72023e","Type":"ContainerStarted","Data":"3518a764898de9d39500b855dbf2a0748ef0e13aa8d83608dab6dc8e5d11b2d9"} Nov 25 19:53:06 crc kubenswrapper[4775]: I1125 19:53:06.568077 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 19:53:10 crc kubenswrapper[4775]: I1125 19:53:10.258019 4775 generic.go:334] "Generic (PLEG): container finished" podID="0b683337-9591-44fc-8815-898878abf387" containerID="34fa1d3e31f5a23de2c8564da649d0861c5ab1a3bb99333d690c0a3747987909" exitCode=0 Nov 25 19:53:10 crc kubenswrapper[4775]: I1125 19:53:10.258120 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f8dccbcd8-gkj6h" event={"ID":"0b683337-9591-44fc-8815-898878abf387","Type":"ContainerDied","Data":"34fa1d3e31f5a23de2c8564da649d0861c5ab1a3bb99333d690c0a3747987909"} Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.072426 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.072477 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.072514 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.073136 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d57ed892e1f28c6ded4ad19e2041e94c1c82f1ac3bd35631b061f8f7717302b"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.073191 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://9d57ed892e1f28c6ded4ad19e2041e94c1c82f1ac3bd35631b061f8f7717302b" gracePeriod=600 Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.274481 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="9d57ed892e1f28c6ded4ad19e2041e94c1c82f1ac3bd35631b061f8f7717302b" exitCode=0 Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.274521 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"9d57ed892e1f28c6ded4ad19e2041e94c1c82f1ac3bd35631b061f8f7717302b"} Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.274557 4775 scope.go:117] "RemoveContainer" containerID="7b6dd9da01186a3ce4b866b2112d5b02fb5c358a2952aa59bf01efd8cd71d7aa" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.335159 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.672955 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-m7qwp"] Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.674227 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m7qwp" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.686835 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-m7qwp"] Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.748714 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zrsbv"] Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.749724 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zrsbv" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.753825 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zrsbv"] Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.809826 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8cpt\" (UniqueName: \"kubernetes.io/projected/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-kube-api-access-r8cpt\") pod \"nova-api-db-create-m7qwp\" (UID: \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\") " pod="openstack/nova-api-db-create-m7qwp" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.809889 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-operator-scripts\") pod \"nova-api-db-create-m7qwp\" (UID: \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\") " pod="openstack/nova-api-db-create-m7qwp" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.851468 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-76p9s"] Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.852608 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-76p9s" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.864938 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6a37-account-create-update-2pzhq"] Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.866013 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6a37-account-create-update-2pzhq" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.870225 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.874093 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6a37-account-create-update-2pzhq"] Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.881467 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-76p9s"] Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.899792 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.912584 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-operator-scripts\") pod \"nova-api-db-create-m7qwp\" (UID: \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\") " pod="openstack/nova-api-db-create-m7qwp" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.914519 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-operator-scripts\") pod \"nova-api-db-create-m7qwp\" (UID: \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\") " pod="openstack/nova-api-db-create-m7qwp" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.914732 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d1bc0cf-f7e6-4621-9098-b78e16900a73-operator-scripts\") pod \"nova-cell0-db-create-zrsbv\" (UID: \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\") " pod="openstack/nova-cell0-db-create-zrsbv" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.914803 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z27r\" (UniqueName: \"kubernetes.io/projected/9d1bc0cf-f7e6-4621-9098-b78e16900a73-kube-api-access-8z27r\") pod \"nova-cell0-db-create-zrsbv\" (UID: \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\") " pod="openstack/nova-cell0-db-create-zrsbv" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.914831 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8cpt\" (UniqueName: \"kubernetes.io/projected/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-kube-api-access-r8cpt\") pod \"nova-api-db-create-m7qwp\" (UID: \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\") " pod="openstack/nova-api-db-create-m7qwp" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.937298 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8cpt\" (UniqueName: \"kubernetes.io/projected/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-kube-api-access-r8cpt\") pod \"nova-api-db-create-m7qwp\" (UID: \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\") " pod="openstack/nova-api-db-create-m7qwp" Nov 25 19:53:11 crc kubenswrapper[4775]: I1125 19:53:11.990276 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-m7qwp" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.016336 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9f5p\" (UniqueName: \"kubernetes.io/projected/48ccf429-6851-44f1-8370-b75877bbaa53-kube-api-access-f9f5p\") pod \"nova-cell1-db-create-76p9s\" (UID: \"48ccf429-6851-44f1-8370-b75877bbaa53\") " pod="openstack/nova-cell1-db-create-76p9s" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.016393 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z27r\" (UniqueName: \"kubernetes.io/projected/9d1bc0cf-f7e6-4621-9098-b78e16900a73-kube-api-access-8z27r\") pod \"nova-cell0-db-create-zrsbv\" (UID: \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\") " pod="openstack/nova-cell0-db-create-zrsbv" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.016462 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsml\" (UniqueName: \"kubernetes.io/projected/731e8277-1449-4718-af2c-ded02ecfe4a9-kube-api-access-pcsml\") pod \"nova-api-6a37-account-create-update-2pzhq\" (UID: \"731e8277-1449-4718-af2c-ded02ecfe4a9\") " pod="openstack/nova-api-6a37-account-create-update-2pzhq" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.016590 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/731e8277-1449-4718-af2c-ded02ecfe4a9-operator-scripts\") pod \"nova-api-6a37-account-create-update-2pzhq\" (UID: \"731e8277-1449-4718-af2c-ded02ecfe4a9\") " pod="openstack/nova-api-6a37-account-create-update-2pzhq" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.016632 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/48ccf429-6851-44f1-8370-b75877bbaa53-operator-scripts\") pod \"nova-cell1-db-create-76p9s\" (UID: \"48ccf429-6851-44f1-8370-b75877bbaa53\") " pod="openstack/nova-cell1-db-create-76p9s" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.016674 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d1bc0cf-f7e6-4621-9098-b78e16900a73-operator-scripts\") pod \"nova-cell0-db-create-zrsbv\" (UID: \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\") " pod="openstack/nova-cell0-db-create-zrsbv" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.017296 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d1bc0cf-f7e6-4621-9098-b78e16900a73-operator-scripts\") pod \"nova-cell0-db-create-zrsbv\" (UID: \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\") " pod="openstack/nova-cell0-db-create-zrsbv" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.048204 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z27r\" (UniqueName: \"kubernetes.io/projected/9d1bc0cf-f7e6-4621-9098-b78e16900a73-kube-api-access-8z27r\") pod \"nova-cell0-db-create-zrsbv\" (UID: \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\") " pod="openstack/nova-cell0-db-create-zrsbv" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.071450 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-025d-account-create-update-g9cmz"] Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.072426 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-025d-account-create-update-g9cmz" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.074014 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zrsbv" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.077870 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.083761 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-025d-account-create-update-g9cmz"] Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.119520 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcsml\" (UniqueName: \"kubernetes.io/projected/731e8277-1449-4718-af2c-ded02ecfe4a9-kube-api-access-pcsml\") pod \"nova-api-6a37-account-create-update-2pzhq\" (UID: \"731e8277-1449-4718-af2c-ded02ecfe4a9\") " pod="openstack/nova-api-6a37-account-create-update-2pzhq" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.119613 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/731e8277-1449-4718-af2c-ded02ecfe4a9-operator-scripts\") pod \"nova-api-6a37-account-create-update-2pzhq\" (UID: \"731e8277-1449-4718-af2c-ded02ecfe4a9\") " pod="openstack/nova-api-6a37-account-create-update-2pzhq" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.119656 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ccf429-6851-44f1-8370-b75877bbaa53-operator-scripts\") pod \"nova-cell1-db-create-76p9s\" (UID: \"48ccf429-6851-44f1-8370-b75877bbaa53\") " pod="openstack/nova-cell1-db-create-76p9s" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.119704 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9f5p\" (UniqueName: \"kubernetes.io/projected/48ccf429-6851-44f1-8370-b75877bbaa53-kube-api-access-f9f5p\") pod \"nova-cell1-db-create-76p9s\" (UID: 
\"48ccf429-6851-44f1-8370-b75877bbaa53\") " pod="openstack/nova-cell1-db-create-76p9s" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.120440 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ccf429-6851-44f1-8370-b75877bbaa53-operator-scripts\") pod \"nova-cell1-db-create-76p9s\" (UID: \"48ccf429-6851-44f1-8370-b75877bbaa53\") " pod="openstack/nova-cell1-db-create-76p9s" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.120466 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/731e8277-1449-4718-af2c-ded02ecfe4a9-operator-scripts\") pod \"nova-api-6a37-account-create-update-2pzhq\" (UID: \"731e8277-1449-4718-af2c-ded02ecfe4a9\") " pod="openstack/nova-api-6a37-account-create-update-2pzhq" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.164251 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9f5p\" (UniqueName: \"kubernetes.io/projected/48ccf429-6851-44f1-8370-b75877bbaa53-kube-api-access-f9f5p\") pod \"nova-cell1-db-create-76p9s\" (UID: \"48ccf429-6851-44f1-8370-b75877bbaa53\") " pod="openstack/nova-cell1-db-create-76p9s" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.164682 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcsml\" (UniqueName: \"kubernetes.io/projected/731e8277-1449-4718-af2c-ded02ecfe4a9-kube-api-access-pcsml\") pod \"nova-api-6a37-account-create-update-2pzhq\" (UID: \"731e8277-1449-4718-af2c-ded02ecfe4a9\") " pod="openstack/nova-api-6a37-account-create-update-2pzhq" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.171062 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-76p9s" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.183008 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6a37-account-create-update-2pzhq" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.221294 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95839f5f-fa72-41a6-88aa-bfd01a5c6571-operator-scripts\") pod \"nova-cell0-025d-account-create-update-g9cmz\" (UID: \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\") " pod="openstack/nova-cell0-025d-account-create-update-g9cmz" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.221364 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvr9v\" (UniqueName: \"kubernetes.io/projected/95839f5f-fa72-41a6-88aa-bfd01a5c6571-kube-api-access-cvr9v\") pod \"nova-cell0-025d-account-create-update-g9cmz\" (UID: \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\") " pod="openstack/nova-cell0-025d-account-create-update-g9cmz" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.253733 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3eb0-account-create-update-m25ts"] Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.254722 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.263974 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.271982 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3eb0-account-create-update-m25ts"] Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.323665 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-252vm\" (UniqueName: \"kubernetes.io/projected/0504e23b-4c14-4256-b888-76415dbc9ae6-kube-api-access-252vm\") pod \"nova-cell1-3eb0-account-create-update-m25ts\" (UID: \"0504e23b-4c14-4256-b888-76415dbc9ae6\") " pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.323729 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95839f5f-fa72-41a6-88aa-bfd01a5c6571-operator-scripts\") pod \"nova-cell0-025d-account-create-update-g9cmz\" (UID: \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\") " pod="openstack/nova-cell0-025d-account-create-update-g9cmz" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.323760 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e23b-4c14-4256-b888-76415dbc9ae6-operator-scripts\") pod \"nova-cell1-3eb0-account-create-update-m25ts\" (UID: \"0504e23b-4c14-4256-b888-76415dbc9ae6\") " pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.323808 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvr9v\" (UniqueName: 
\"kubernetes.io/projected/95839f5f-fa72-41a6-88aa-bfd01a5c6571-kube-api-access-cvr9v\") pod \"nova-cell0-025d-account-create-update-g9cmz\" (UID: \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\") " pod="openstack/nova-cell0-025d-account-create-update-g9cmz" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.324551 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95839f5f-fa72-41a6-88aa-bfd01a5c6571-operator-scripts\") pod \"nova-cell0-025d-account-create-update-g9cmz\" (UID: \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\") " pod="openstack/nova-cell0-025d-account-create-update-g9cmz" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.351330 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvr9v\" (UniqueName: \"kubernetes.io/projected/95839f5f-fa72-41a6-88aa-bfd01a5c6571-kube-api-access-cvr9v\") pod \"nova-cell0-025d-account-create-update-g9cmz\" (UID: \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\") " pod="openstack/nova-cell0-025d-account-create-update-g9cmz" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.395191 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-025d-account-create-update-g9cmz" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.425399 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-252vm\" (UniqueName: \"kubernetes.io/projected/0504e23b-4c14-4256-b888-76415dbc9ae6-kube-api-access-252vm\") pod \"nova-cell1-3eb0-account-create-update-m25ts\" (UID: \"0504e23b-4c14-4256-b888-76415dbc9ae6\") " pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.425468 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e23b-4c14-4256-b888-76415dbc9ae6-operator-scripts\") pod \"nova-cell1-3eb0-account-create-update-m25ts\" (UID: \"0504e23b-4c14-4256-b888-76415dbc9ae6\") " pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.426187 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e23b-4c14-4256-b888-76415dbc9ae6-operator-scripts\") pod \"nova-cell1-3eb0-account-create-update-m25ts\" (UID: \"0504e23b-4c14-4256-b888-76415dbc9ae6\") " pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.442064 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-252vm\" (UniqueName: \"kubernetes.io/projected/0504e23b-4c14-4256-b888-76415dbc9ae6-kube-api-access-252vm\") pod \"nova-cell1-3eb0-account-create-update-m25ts\" (UID: \"0504e23b-4c14-4256-b888-76415dbc9ae6\") " pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" Nov 25 19:53:12 crc kubenswrapper[4775]: I1125 19:53:12.579042 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" Nov 25 19:53:14 crc kubenswrapper[4775]: I1125 19:53:14.578308 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:14 crc kubenswrapper[4775]: I1125 19:53:14.578932 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="sg-core" containerID="cri-o://cff4db24649b95c244f1504685a3980b70c042bd7cb20920137427b889487cad" gracePeriod=30 Nov 25 19:53:14 crc kubenswrapper[4775]: I1125 19:53:14.578976 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="proxy-httpd" containerID="cri-o://8e7d892a79e29fb2849aab43bfa7240de67629f42a8dc06bf021e25328fd5127" gracePeriod=30 Nov 25 19:53:14 crc kubenswrapper[4775]: I1125 19:53:14.579002 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="ceilometer-notification-agent" containerID="cri-o://de06255136ed863cfb2a96032f0ea0582ab0ef4f2b530de9a39469cb597f9fe4" gracePeriod=30 Nov 25 19:53:14 crc kubenswrapper[4775]: I1125 19:53:14.579214 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="ceilometer-central-agent" containerID="cri-o://f09082f9852506fa895d7821b1c13f8074f430a1d4ea1d577368699559e9f0e7" gracePeriod=30 Nov 25 19:53:15 crc kubenswrapper[4775]: I1125 19:53:15.311401 4775 generic.go:334] "Generic (PLEG): container finished" podID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerID="8e7d892a79e29fb2849aab43bfa7240de67629f42a8dc06bf021e25328fd5127" exitCode=0 Nov 25 19:53:15 crc kubenswrapper[4775]: I1125 19:53:15.311789 4775 generic.go:334] "Generic (PLEG): container 
finished" podID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerID="cff4db24649b95c244f1504685a3980b70c042bd7cb20920137427b889487cad" exitCode=2 Nov 25 19:53:15 crc kubenswrapper[4775]: I1125 19:53:15.311801 4775 generic.go:334] "Generic (PLEG): container finished" podID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerID="de06255136ed863cfb2a96032f0ea0582ab0ef4f2b530de9a39469cb597f9fe4" exitCode=0 Nov 25 19:53:15 crc kubenswrapper[4775]: I1125 19:53:15.311809 4775 generic.go:334] "Generic (PLEG): container finished" podID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerID="f09082f9852506fa895d7821b1c13f8074f430a1d4ea1d577368699559e9f0e7" exitCode=0 Nov 25 19:53:15 crc kubenswrapper[4775]: I1125 19:53:15.311500 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerDied","Data":"8e7d892a79e29fb2849aab43bfa7240de67629f42a8dc06bf021e25328fd5127"} Nov 25 19:53:15 crc kubenswrapper[4775]: I1125 19:53:15.311868 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerDied","Data":"cff4db24649b95c244f1504685a3980b70c042bd7cb20920137427b889487cad"} Nov 25 19:53:15 crc kubenswrapper[4775]: I1125 19:53:15.311885 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerDied","Data":"de06255136ed863cfb2a96032f0ea0582ab0ef4f2b530de9a39469cb597f9fe4"} Nov 25 19:53:15 crc kubenswrapper[4775]: I1125 19:53:15.311898 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerDied","Data":"f09082f9852506fa895d7821b1c13f8074f430a1d4ea1d577368699559e9f0e7"} Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:15.999571 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.088514 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-ovndb-tls-certs\") pod \"0b683337-9591-44fc-8815-898878abf387\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.088575 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-combined-ca-bundle\") pod \"0b683337-9591-44fc-8815-898878abf387\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.088697 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-httpd-config\") pod \"0b683337-9591-44fc-8815-898878abf387\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.088768 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfxrm\" (UniqueName: \"kubernetes.io/projected/0b683337-9591-44fc-8815-898878abf387-kube-api-access-xfxrm\") pod \"0b683337-9591-44fc-8815-898878abf387\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.088792 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-config\") pod \"0b683337-9591-44fc-8815-898878abf387\" (UID: \"0b683337-9591-44fc-8815-898878abf387\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.098413 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0b683337-9591-44fc-8815-898878abf387" (UID: "0b683337-9591-44fc-8815-898878abf387"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.100828 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b683337-9591-44fc-8815-898878abf387-kube-api-access-xfxrm" (OuterVolumeSpecName: "kube-api-access-xfxrm") pod "0b683337-9591-44fc-8815-898878abf387" (UID: "0b683337-9591-44fc-8815-898878abf387"). InnerVolumeSpecName "kube-api-access-xfxrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.165458 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b683337-9591-44fc-8815-898878abf387" (UID: "0b683337-9591-44fc-8815-898878abf387"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.167691 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-config" (OuterVolumeSpecName: "config") pod "0b683337-9591-44fc-8815-898878abf387" (UID: "0b683337-9591-44fc-8815-898878abf387"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.188534 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0b683337-9591-44fc-8815-898878abf387" (UID: "0b683337-9591-44fc-8815-898878abf387"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.192864 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfxrm\" (UniqueName: \"kubernetes.io/projected/0b683337-9591-44fc-8815-898878abf387-kube-api-access-xfxrm\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.192898 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.192915 4775 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.192926 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.192938 4775 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b683337-9591-44fc-8815-898878abf387-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.354477 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f8dccbcd8-gkj6h" event={"ID":"0b683337-9591-44fc-8815-898878abf387","Type":"ContainerDied","Data":"d6699d7ec90a951375a3f39415b0f9a397400c8afbec0f2220122cf9e7838c3c"} Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.354530 4775 scope.go:117] "RemoveContainer" containerID="a424cfcdcb7ed66a5df7ce811658598803456291bc0c6d5b6832794e3d1bd219" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.354641 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f8dccbcd8-gkj6h" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.438165 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.456530 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f8dccbcd8-gkj6h"] Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.470115 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f8dccbcd8-gkj6h"] Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.474176 4775 scope.go:117] "RemoveContainer" containerID="34fa1d3e31f5a23de2c8564da649d0861c5ab1a3bb99333d690c0a3747987909" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.603230 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-run-httpd\") pod \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.603294 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-log-httpd\") pod \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.603395 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-sg-core-conf-yaml\") pod \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.603430 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-combined-ca-bundle\") pod \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.603490 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-config-data\") pod \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.603528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nflgl\" (UniqueName: \"kubernetes.io/projected/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-kube-api-access-nflgl\") pod \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.603613 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-scripts\") pod \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\" (UID: \"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a\") " Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.608963 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" (UID: "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.609177 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" (UID: "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.611411 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-scripts" (OuterVolumeSpecName: "scripts") pod "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" (UID: "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.612424 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-kube-api-access-nflgl" (OuterVolumeSpecName: "kube-api-access-nflgl") pod "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" (UID: "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a"). InnerVolumeSpecName "kube-api-access-nflgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.629992 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-m7qwp"] Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.664197 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" (UID: "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.705571 4775 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.705600 4775 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.705610 4775 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.705623 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nflgl\" (UniqueName: \"kubernetes.io/projected/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-kube-api-access-nflgl\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.705634 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.782804 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" (UID: "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.810159 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.831220 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-config-data" (OuterVolumeSpecName: "config-data") pod "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" (UID: "df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:16 crc kubenswrapper[4775]: W1125 19:53:16.834294 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95839f5f_fa72_41a6_88aa_bfd01a5c6571.slice/crio-019f1c7bced0319032f9342a2bf1ae65587fdc0e412b7be461605cbaf0dc29a4 WatchSource:0}: Error finding container 019f1c7bced0319032f9342a2bf1ae65587fdc0e412b7be461605cbaf0dc29a4: Status 404 returned error can't find the container with id 019f1c7bced0319032f9342a2bf1ae65587fdc0e412b7be461605cbaf0dc29a4 Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.843172 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-025d-account-create-update-g9cmz"] Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.886984 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b683337-9591-44fc-8815-898878abf387" path="/var/lib/kubelet/pods/0b683337-9591-44fc-8815-898878abf387/volumes" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.887793 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-76p9s"] Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.912519 4775 reconciler_common.go:293] "Volume detached for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:16 crc kubenswrapper[4775]: I1125 19:53:16.994030 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6a37-account-create-update-2pzhq"] Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.001622 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3eb0-account-create-update-m25ts"] Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.013185 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zrsbv"] Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.380057 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"cca31bd7d1401819a1ce35374bd96d54908cbd0258987317dc941bcce28d9472"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.382826 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-025d-account-create-update-g9cmz" event={"ID":"95839f5f-fa72-41a6-88aa-bfd01a5c6571","Type":"ContainerStarted","Data":"019f1c7bced0319032f9342a2bf1ae65587fdc0e412b7be461605cbaf0dc29a4"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.384922 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6a37-account-create-update-2pzhq" event={"ID":"731e8277-1449-4718-af2c-ded02ecfe4a9","Type":"ContainerStarted","Data":"8a78830782065434b93cacec85067eccbe6970713d72d39624e9aee4d3f0f4b2"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.386619 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-76p9s" event={"ID":"48ccf429-6851-44f1-8370-b75877bbaa53","Type":"ContainerStarted","Data":"d9babb4088c262ea3b3f2900253eaa6efa17639108a62bc7884435531c114894"} Nov 25 19:53:17 crc 
kubenswrapper[4775]: I1125 19:53:17.386721 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-76p9s" event={"ID":"48ccf429-6851-44f1-8370-b75877bbaa53","Type":"ContainerStarted","Data":"164ae74f545a3b25deef8d479bb187a5d3d8eb24068edae03b5003d9f224c6f7"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.388323 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zrsbv" event={"ID":"9d1bc0cf-f7e6-4621-9098-b78e16900a73","Type":"ContainerStarted","Data":"95b75b83f356ba260024b8849cdedc2f599dbe7f809509017972cae5e1dad99c"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.390118 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a","Type":"ContainerDied","Data":"5998b4b869f29df97f53a80f8e975c6e17885f4021589704c6d5e1fc1a6b3828"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.390148 4775 scope.go:117] "RemoveContainer" containerID="8e7d892a79e29fb2849aab43bfa7240de67629f42a8dc06bf021e25328fd5127" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.390224 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.391969 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" event={"ID":"0504e23b-4c14-4256-b888-76415dbc9ae6","Type":"ContainerStarted","Data":"0767daab84361c25247857cff79392cf6def152aaaf1c35d88272c56a93c9ad7"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.395670 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a36126d8-22b4-46b4-aa24-c02eba72023e","Type":"ContainerStarted","Data":"dd3fb7409db657b461648e34b45615274fee0c1242b2fc17a16b673060fd59f2"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.397934 4775 generic.go:334] "Generic (PLEG): container finished" podID="2df0ca5f-5a5f-4186-8d30-66d7dabaa31c" containerID="55d1fc8d4d796fd0073c5037e3dc9aa4db434991abee62afa5bd6a014e4161fd" exitCode=0 Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.397974 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m7qwp" event={"ID":"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c","Type":"ContainerDied","Data":"55d1fc8d4d796fd0073c5037e3dc9aa4db434991abee62afa5bd6a014e4161fd"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.397992 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m7qwp" event={"ID":"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c","Type":"ContainerStarted","Data":"640498d08baff6b362534cfb8deeb1787f2f95515d6daba83b457ce396aad900"} Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.413976 4775 scope.go:117] "RemoveContainer" containerID="cff4db24649b95c244f1504685a3980b70c042bd7cb20920137427b889487cad" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.420908 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.313535019 podStartE2EDuration="12.420889837s" 
podCreationTimestamp="2025-11-25 19:53:05 +0000 UTC" firstStartedPulling="2025-11-25 19:53:05.909771891 +0000 UTC m=+1167.826134257" lastFinishedPulling="2025-11-25 19:53:16.017126709 +0000 UTC m=+1177.933489075" observedRunningTime="2025-11-25 19:53:17.419481519 +0000 UTC m=+1179.335843885" watchObservedRunningTime="2025-11-25 19:53:17.420889837 +0000 UTC m=+1179.337252203" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.435801 4775 scope.go:117] "RemoveContainer" containerID="de06255136ed863cfb2a96032f0ea0582ab0ef4f2b530de9a39469cb597f9fe4" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.468423 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.474798 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.476384 4775 scope.go:117] "RemoveContainer" containerID="f09082f9852506fa895d7821b1c13f8074f430a1d4ea1d577368699559e9f0e7" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.483975 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:17 crc kubenswrapper[4775]: E1125 19:53:17.484394 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b683337-9591-44fc-8815-898878abf387" containerName="neutron-api" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484413 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b683337-9591-44fc-8815-898878abf387" containerName="neutron-api" Nov 25 19:53:17 crc kubenswrapper[4775]: E1125 19:53:17.484432 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="ceilometer-central-agent" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484441 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="ceilometer-central-agent" Nov 25 19:53:17 
crc kubenswrapper[4775]: E1125 19:53:17.484451 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="ceilometer-notification-agent" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484459 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="ceilometer-notification-agent" Nov 25 19:53:17 crc kubenswrapper[4775]: E1125 19:53:17.484470 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b683337-9591-44fc-8815-898878abf387" containerName="neutron-httpd" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484476 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b683337-9591-44fc-8815-898878abf387" containerName="neutron-httpd" Nov 25 19:53:17 crc kubenswrapper[4775]: E1125 19:53:17.484494 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="sg-core" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484501 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="sg-core" Nov 25 19:53:17 crc kubenswrapper[4775]: E1125 19:53:17.484520 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="proxy-httpd" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484526 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="proxy-httpd" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484697 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b683337-9591-44fc-8815-898878abf387" containerName="neutron-httpd" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484713 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b683337-9591-44fc-8815-898878abf387" containerName="neutron-api" Nov 25 19:53:17 crc 
kubenswrapper[4775]: I1125 19:53:17.484724 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="sg-core" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484734 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="ceilometer-central-agent" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484750 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="ceilometer-notification-agent" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.484759 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" containerName="proxy-httpd" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.486576 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.489683 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.497620 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.502581 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.525722 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-76p9s" podStartSLOduration=6.525704032 podStartE2EDuration="6.525704032s" podCreationTimestamp="2025-11-25 19:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:53:17.522511835 +0000 UTC m=+1179.438874201" watchObservedRunningTime="2025-11-25 19:53:17.525704032 +0000 UTC 
m=+1179.442066398" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.625262 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-scripts\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.625320 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.625352 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-log-httpd\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.625383 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-run-httpd\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.625403 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-config-data\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.625424 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtkrr\" (UniqueName: \"kubernetes.io/projected/2d130916-7d97-4a78-8db5-29b19b4b896a-kube-api-access-jtkrr\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.625447 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.727351 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-scripts\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.727427 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.727463 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-log-httpd\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.727497 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-run-httpd\") pod \"ceilometer-0\" (UID: 
\"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.727517 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-config-data\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.727542 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtkrr\" (UniqueName: \"kubernetes.io/projected/2d130916-7d97-4a78-8db5-29b19b4b896a-kube-api-access-jtkrr\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.727575 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.728880 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-log-httpd\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.729031 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-run-httpd\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.735083 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-scripts\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.735750 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.736867 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-config-data\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.738065 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.746133 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtkrr\" (UniqueName: \"kubernetes.io/projected/2d130916-7d97-4a78-8db5-29b19b4b896a-kube-api-access-jtkrr\") pod \"ceilometer-0\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " pod="openstack/ceilometer-0" Nov 25 19:53:17 crc kubenswrapper[4775]: I1125 19:53:17.823265 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.281508 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.304558 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 19:53:18 crc kubenswrapper[4775]: W1125 19:53:18.311758 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d130916_7d97_4a78_8db5_29b19b4b896a.slice/crio-6bbc44eefded663a901fc537cec1055134da168eb040cfebd8080982d771dcec WatchSource:0}: Error finding container 6bbc44eefded663a901fc537cec1055134da168eb040cfebd8080982d771dcec: Status 404 returned error can't find the container with id 6bbc44eefded663a901fc537cec1055134da168eb040cfebd8080982d771dcec
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.407691 4775 generic.go:334] "Generic (PLEG): container finished" podID="9d1bc0cf-f7e6-4621-9098-b78e16900a73" containerID="661166203c6c2c35526ca81182541281dc54660f35d0bd42530fef2716974ff9" exitCode=0
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.408038 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zrsbv" event={"ID":"9d1bc0cf-f7e6-4621-9098-b78e16900a73","Type":"ContainerDied","Data":"661166203c6c2c35526ca81182541281dc54660f35d0bd42530fef2716974ff9"}
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.412798 4775 generic.go:334] "Generic (PLEG): container finished" podID="731e8277-1449-4718-af2c-ded02ecfe4a9" containerID="0f513807f0f7afe4a27a21a2491b85d403760c882e2f04c135090e9d96c3aeb7" exitCode=0
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.412840 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6a37-account-create-update-2pzhq" event={"ID":"731e8277-1449-4718-af2c-ded02ecfe4a9","Type":"ContainerDied","Data":"0f513807f0f7afe4a27a21a2491b85d403760c882e2f04c135090e9d96c3aeb7"}
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.415083 4775 generic.go:334] "Generic (PLEG): container finished" podID="48ccf429-6851-44f1-8370-b75877bbaa53" containerID="d9babb4088c262ea3b3f2900253eaa6efa17639108a62bc7884435531c114894" exitCode=0
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.415108 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-76p9s" event={"ID":"48ccf429-6851-44f1-8370-b75877bbaa53","Type":"ContainerDied","Data":"d9babb4088c262ea3b3f2900253eaa6efa17639108a62bc7884435531c114894"}
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.416243 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerStarted","Data":"6bbc44eefded663a901fc537cec1055134da168eb040cfebd8080982d771dcec"}
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.417999 4775 generic.go:334] "Generic (PLEG): container finished" podID="0504e23b-4c14-4256-b888-76415dbc9ae6" containerID="dac9aa04f4cbb44b49ec480df99b84c0f636fdcfdfcafe305d708c1fb805603e" exitCode=0
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.418026 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" event={"ID":"0504e23b-4c14-4256-b888-76415dbc9ae6","Type":"ContainerDied","Data":"dac9aa04f4cbb44b49ec480df99b84c0f636fdcfdfcafe305d708c1fb805603e"}
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.420848 4775 generic.go:334] "Generic (PLEG): container finished" podID="95839f5f-fa72-41a6-88aa-bfd01a5c6571" containerID="1fa0be2c96f86f9443d4d1b18a657c009f1519093ba208c8d9352159b94e8bb5" exitCode=0
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.421701 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-025d-account-create-update-g9cmz" event={"ID":"95839f5f-fa72-41a6-88aa-bfd01a5c6571","Type":"ContainerDied","Data":"1fa0be2c96f86f9443d4d1b18a657c009f1519093ba208c8d9352159b94e8bb5"}
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.870938 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a" path="/var/lib/kubelet/pods/df7c3214-aa6b-4fd4-94fa-46e06cb2fd4a/volumes"
Nov 25 19:53:18 crc kubenswrapper[4775]: I1125 19:53:18.970238 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m7qwp"
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.054506 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-operator-scripts\") pod \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\" (UID: \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\") "
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.054624 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8cpt\" (UniqueName: \"kubernetes.io/projected/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-kube-api-access-r8cpt\") pod \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\" (UID: \"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c\") "
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.055093 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2df0ca5f-5a5f-4186-8d30-66d7dabaa31c" (UID: "2df0ca5f-5a5f-4186-8d30-66d7dabaa31c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.055409 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.069805 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-kube-api-access-r8cpt" (OuterVolumeSpecName: "kube-api-access-r8cpt") pod "2df0ca5f-5a5f-4186-8d30-66d7dabaa31c" (UID: "2df0ca5f-5a5f-4186-8d30-66d7dabaa31c"). InnerVolumeSpecName "kube-api-access-r8cpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.157332 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8cpt\" (UniqueName: \"kubernetes.io/projected/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c-kube-api-access-r8cpt\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.454413 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerStarted","Data":"84c6f9961857323a7db95312ba406447490ccf9baac51c0fbb8f87c1ba42440c"}
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.458724 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m7qwp"
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.460717 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m7qwp" event={"ID":"2df0ca5f-5a5f-4186-8d30-66d7dabaa31c","Type":"ContainerDied","Data":"640498d08baff6b362534cfb8deeb1787f2f95515d6daba83b457ce396aad900"}
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.460780 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="640498d08baff6b362534cfb8deeb1787f2f95515d6daba83b457ce396aad900"
Nov 25 19:53:19 crc kubenswrapper[4775]: I1125 19:53:19.936366 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zrsbv"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.085473 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z27r\" (UniqueName: \"kubernetes.io/projected/9d1bc0cf-f7e6-4621-9098-b78e16900a73-kube-api-access-8z27r\") pod \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\" (UID: \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.085516 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d1bc0cf-f7e6-4621-9098-b78e16900a73-operator-scripts\") pod \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\" (UID: \"9d1bc0cf-f7e6-4621-9098-b78e16900a73\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.086686 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d1bc0cf-f7e6-4621-9098-b78e16900a73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d1bc0cf-f7e6-4621-9098-b78e16900a73" (UID: "9d1bc0cf-f7e6-4621-9098-b78e16900a73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.091271 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1bc0cf-f7e6-4621-9098-b78e16900a73-kube-api-access-8z27r" (OuterVolumeSpecName: "kube-api-access-8z27r") pod "9d1bc0cf-f7e6-4621-9098-b78e16900a73" (UID: "9d1bc0cf-f7e6-4621-9098-b78e16900a73"). InnerVolumeSpecName "kube-api-access-8z27r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.187827 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z27r\" (UniqueName: \"kubernetes.io/projected/9d1bc0cf-f7e6-4621-9098-b78e16900a73-kube-api-access-8z27r\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.187855 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d1bc0cf-f7e6-4621-9098-b78e16900a73-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.223506 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-025d-account-create-update-g9cmz"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.229148 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6a37-account-create-update-2pzhq"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.234835 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-76p9s"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.244152 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3eb0-account-create-update-m25ts"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.294198 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvr9v\" (UniqueName: \"kubernetes.io/projected/95839f5f-fa72-41a6-88aa-bfd01a5c6571-kube-api-access-cvr9v\") pod \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\" (UID: \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.294270 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95839f5f-fa72-41a6-88aa-bfd01a5c6571-operator-scripts\") pod \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\" (UID: \"95839f5f-fa72-41a6-88aa-bfd01a5c6571\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.294333 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/731e8277-1449-4718-af2c-ded02ecfe4a9-operator-scripts\") pod \"731e8277-1449-4718-af2c-ded02ecfe4a9\" (UID: \"731e8277-1449-4718-af2c-ded02ecfe4a9\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.294354 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ccf429-6851-44f1-8370-b75877bbaa53-operator-scripts\") pod \"48ccf429-6851-44f1-8370-b75877bbaa53\" (UID: \"48ccf429-6851-44f1-8370-b75877bbaa53\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.294483 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcsml\" (UniqueName: \"kubernetes.io/projected/731e8277-1449-4718-af2c-ded02ecfe4a9-kube-api-access-pcsml\") pod \"731e8277-1449-4718-af2c-ded02ecfe4a9\" (UID: \"731e8277-1449-4718-af2c-ded02ecfe4a9\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.294539 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9f5p\" (UniqueName: \"kubernetes.io/projected/48ccf429-6851-44f1-8370-b75877bbaa53-kube-api-access-f9f5p\") pod \"48ccf429-6851-44f1-8370-b75877bbaa53\" (UID: \"48ccf429-6851-44f1-8370-b75877bbaa53\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.296802 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95839f5f-fa72-41a6-88aa-bfd01a5c6571-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95839f5f-fa72-41a6-88aa-bfd01a5c6571" (UID: "95839f5f-fa72-41a6-88aa-bfd01a5c6571"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.297767 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ccf429-6851-44f1-8370-b75877bbaa53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48ccf429-6851-44f1-8370-b75877bbaa53" (UID: "48ccf429-6851-44f1-8370-b75877bbaa53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.298131 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/731e8277-1449-4718-af2c-ded02ecfe4a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "731e8277-1449-4718-af2c-ded02ecfe4a9" (UID: "731e8277-1449-4718-af2c-ded02ecfe4a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.300202 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95839f5f-fa72-41a6-88aa-bfd01a5c6571-kube-api-access-cvr9v" (OuterVolumeSpecName: "kube-api-access-cvr9v") pod "95839f5f-fa72-41a6-88aa-bfd01a5c6571" (UID: "95839f5f-fa72-41a6-88aa-bfd01a5c6571"). InnerVolumeSpecName "kube-api-access-cvr9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.300297 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731e8277-1449-4718-af2c-ded02ecfe4a9-kube-api-access-pcsml" (OuterVolumeSpecName: "kube-api-access-pcsml") pod "731e8277-1449-4718-af2c-ded02ecfe4a9" (UID: "731e8277-1449-4718-af2c-ded02ecfe4a9"). InnerVolumeSpecName "kube-api-access-pcsml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.305248 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ccf429-6851-44f1-8370-b75877bbaa53-kube-api-access-f9f5p" (OuterVolumeSpecName: "kube-api-access-f9f5p") pod "48ccf429-6851-44f1-8370-b75877bbaa53" (UID: "48ccf429-6851-44f1-8370-b75877bbaa53"). InnerVolumeSpecName "kube-api-access-f9f5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.371862 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.400917 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-252vm\" (UniqueName: \"kubernetes.io/projected/0504e23b-4c14-4256-b888-76415dbc9ae6-kube-api-access-252vm\") pod \"0504e23b-4c14-4256-b888-76415dbc9ae6\" (UID: \"0504e23b-4c14-4256-b888-76415dbc9ae6\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.401425 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e23b-4c14-4256-b888-76415dbc9ae6-operator-scripts\") pod \"0504e23b-4c14-4256-b888-76415dbc9ae6\" (UID: \"0504e23b-4c14-4256-b888-76415dbc9ae6\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.401856 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95839f5f-fa72-41a6-88aa-bfd01a5c6571-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.401877 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/731e8277-1449-4718-af2c-ded02ecfe4a9-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.401890 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ccf429-6851-44f1-8370-b75877bbaa53-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.401902 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcsml\" (UniqueName: \"kubernetes.io/projected/731e8277-1449-4718-af2c-ded02ecfe4a9-kube-api-access-pcsml\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.401916 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9f5p\" (UniqueName: \"kubernetes.io/projected/48ccf429-6851-44f1-8370-b75877bbaa53-kube-api-access-f9f5p\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.401927 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvr9v\" (UniqueName: \"kubernetes.io/projected/95839f5f-fa72-41a6-88aa-bfd01a5c6571-kube-api-access-cvr9v\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.402335 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0504e23b-4c14-4256-b888-76415dbc9ae6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0504e23b-4c14-4256-b888-76415dbc9ae6" (UID: "0504e23b-4c14-4256-b888-76415dbc9ae6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.414223 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0504e23b-4c14-4256-b888-76415dbc9ae6-kube-api-access-252vm" (OuterVolumeSpecName: "kube-api-access-252vm") pod "0504e23b-4c14-4256-b888-76415dbc9ae6" (UID: "0504e23b-4c14-4256-b888-76415dbc9ae6"). InnerVolumeSpecName "kube-api-access-252vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.476384 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc956353-4430-4219-b077-5fe86ba366aa" containerID="aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33" exitCode=137
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.476437 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bc956353-4430-4219-b077-5fe86ba366aa","Type":"ContainerDied","Data":"aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33"}
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.476464 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bc956353-4430-4219-b077-5fe86ba366aa","Type":"ContainerDied","Data":"af51d07fc67f5297b82e3b4e17ac16e7a0d73931a6cdca3fdccfa045664d7ed5"}
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.476479 4775 scope.go:117] "RemoveContainer" containerID="aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.476598 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.482778 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6a37-account-create-update-2pzhq"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.482839 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6a37-account-create-update-2pzhq" event={"ID":"731e8277-1449-4718-af2c-ded02ecfe4a9","Type":"ContainerDied","Data":"8a78830782065434b93cacec85067eccbe6970713d72d39624e9aee4d3f0f4b2"}
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.483161 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a78830782065434b93cacec85067eccbe6970713d72d39624e9aee4d3f0f4b2"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.491056 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-76p9s" event={"ID":"48ccf429-6851-44f1-8370-b75877bbaa53","Type":"ContainerDied","Data":"164ae74f545a3b25deef8d479bb187a5d3d8eb24068edae03b5003d9f224c6f7"}
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.491096 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="164ae74f545a3b25deef8d479bb187a5d3d8eb24068edae03b5003d9f224c6f7"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.491146 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-76p9s"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.503600 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc956353-4430-4219-b077-5fe86ba366aa-logs\") pod \"bc956353-4430-4219-b077-5fe86ba366aa\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.503692 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-combined-ca-bundle\") pod \"bc956353-4430-4219-b077-5fe86ba366aa\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.504673 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc956353-4430-4219-b077-5fe86ba366aa-logs" (OuterVolumeSpecName: "logs") pod "bc956353-4430-4219-b077-5fe86ba366aa" (UID: "bc956353-4430-4219-b077-5fe86ba366aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.504745 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh8tz\" (UniqueName: \"kubernetes.io/projected/bc956353-4430-4219-b077-5fe86ba366aa-kube-api-access-lh8tz\") pod \"bc956353-4430-4219-b077-5fe86ba366aa\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.508386 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data-custom\") pod \"bc956353-4430-4219-b077-5fe86ba366aa\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.508413 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-scripts\") pod \"bc956353-4430-4219-b077-5fe86ba366aa\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.508442 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data\") pod \"bc956353-4430-4219-b077-5fe86ba366aa\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.508517 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc956353-4430-4219-b077-5fe86ba366aa-etc-machine-id\") pod \"bc956353-4430-4219-b077-5fe86ba366aa\" (UID: \"bc956353-4430-4219-b077-5fe86ba366aa\") "
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.510393 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc956353-4430-4219-b077-5fe86ba366aa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bc956353-4430-4219-b077-5fe86ba366aa" (UID: "bc956353-4430-4219-b077-5fe86ba366aa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.520073 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e23b-4c14-4256-b888-76415dbc9ae6-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.520103 4775 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc956353-4430-4219-b077-5fe86ba366aa-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.520113 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc956353-4430-4219-b077-5fe86ba366aa-logs\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.520122 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-252vm\" (UniqueName: \"kubernetes.io/projected/0504e23b-4c14-4256-b888-76415dbc9ae6-kube-api-access-252vm\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.523686 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bc956353-4430-4219-b077-5fe86ba366aa" (UID: "bc956353-4430-4219-b077-5fe86ba366aa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.523741 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-scripts" (OuterVolumeSpecName: "scripts") pod "bc956353-4430-4219-b077-5fe86ba366aa" (UID: "bc956353-4430-4219-b077-5fe86ba366aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.523784 4775 scope.go:117] "RemoveContainer" containerID="fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.533095 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerStarted","Data":"3ccaafe2c8e886e0c57c876d54879fe300bcbb9298b0715feccdb0f2807e3ba5"}
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.536601 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3eb0-account-create-update-m25ts" event={"ID":"0504e23b-4c14-4256-b888-76415dbc9ae6","Type":"ContainerDied","Data":"0767daab84361c25247857cff79392cf6def152aaaf1c35d88272c56a93c9ad7"}
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.536627 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0767daab84361c25247857cff79392cf6def152aaaf1c35d88272c56a93c9ad7"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.536692 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3eb0-account-create-update-m25ts"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.536796 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc956353-4430-4219-b077-5fe86ba366aa-kube-api-access-lh8tz" (OuterVolumeSpecName: "kube-api-access-lh8tz") pod "bc956353-4430-4219-b077-5fe86ba366aa" (UID: "bc956353-4430-4219-b077-5fe86ba366aa"). InnerVolumeSpecName "kube-api-access-lh8tz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.559174 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-025d-account-create-update-g9cmz"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.559250 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-025d-account-create-update-g9cmz" event={"ID":"95839f5f-fa72-41a6-88aa-bfd01a5c6571","Type":"ContainerDied","Data":"019f1c7bced0319032f9342a2bf1ae65587fdc0e412b7be461605cbaf0dc29a4"}
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.559284 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="019f1c7bced0319032f9342a2bf1ae65587fdc0e412b7be461605cbaf0dc29a4"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.562081 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zrsbv" event={"ID":"9d1bc0cf-f7e6-4621-9098-b78e16900a73","Type":"ContainerDied","Data":"95b75b83f356ba260024b8849cdedc2f599dbe7f809509017972cae5e1dad99c"}
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.562106 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95b75b83f356ba260024b8849cdedc2f599dbe7f809509017972cae5e1dad99c"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.562154 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zrsbv"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.574783 4775 scope.go:117] "RemoveContainer" containerID="aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33"
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.575213 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33\": container with ID starting with aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33 not found: ID does not exist" containerID="aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.575245 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33"} err="failed to get container status \"aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33\": rpc error: code = NotFound desc = could not find container \"aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33\": container with ID starting with aa3cc2cb9ba62baf7300c92a06148d29e9a6c4350753a369b9438ce88de64a33 not found: ID does not exist"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.575264 4775 scope.go:117] "RemoveContainer" containerID="fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48"
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.575568 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48\": container with ID starting with fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48 not found: ID does not exist" containerID="fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.575592 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48"} err="failed to get container status \"fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48\": rpc error: code = NotFound desc = could not find container \"fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48\": container with ID starting with fc4c91b583bcec9869284700d28ba57b39e4296a6370ff0a1494e10d57030d48 not found: ID does not exist"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.575906 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc956353-4430-4219-b077-5fe86ba366aa" (UID: "bc956353-4430-4219-b077-5fe86ba366aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.613529 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data" (OuterVolumeSpecName: "config-data") pod "bc956353-4430-4219-b077-5fe86ba366aa" (UID: "bc956353-4430-4219-b077-5fe86ba366aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.621799 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.621831 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh8tz\" (UniqueName: \"kubernetes.io/projected/bc956353-4430-4219-b077-5fe86ba366aa-kube-api-access-lh8tz\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.621840 4775 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.621848 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.621857 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc956353-4430-4219-b077-5fe86ba366aa-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.816104 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.823009 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.844482 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.845064 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ccf429-6851-44f1-8370-b75877bbaa53" containerName="mariadb-database-create"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845093 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ccf429-6851-44f1-8370-b75877bbaa53" containerName="mariadb-database-create"
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.845118 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95839f5f-fa72-41a6-88aa-bfd01a5c6571" containerName="mariadb-account-create-update"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845131 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="95839f5f-fa72-41a6-88aa-bfd01a5c6571" containerName="mariadb-account-create-update"
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.845164 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc956353-4430-4219-b077-5fe86ba366aa" containerName="cinder-api-log"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845178 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc956353-4430-4219-b077-5fe86ba366aa" containerName="cinder-api-log"
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.845205 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc956353-4430-4219-b077-5fe86ba366aa" containerName="cinder-api"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845219 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc956353-4430-4219-b077-5fe86ba366aa" containerName="cinder-api"
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.845238 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df0ca5f-5a5f-4186-8d30-66d7dabaa31c" containerName="mariadb-database-create"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845252 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df0ca5f-5a5f-4186-8d30-66d7dabaa31c" containerName="mariadb-database-create"
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.845273 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1bc0cf-f7e6-4621-9098-b78e16900a73" containerName="mariadb-database-create"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845286 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1bc0cf-f7e6-4621-9098-b78e16900a73" containerName="mariadb-database-create"
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.845331 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0504e23b-4c14-4256-b888-76415dbc9ae6" containerName="mariadb-account-create-update"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845346 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0504e23b-4c14-4256-b888-76415dbc9ae6" containerName="mariadb-account-create-update"
Nov 25 19:53:20 crc kubenswrapper[4775]: E1125 19:53:20.845365 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731e8277-1449-4718-af2c-ded02ecfe4a9" containerName="mariadb-account-create-update"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845378 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="731e8277-1449-4718-af2c-ded02ecfe4a9" containerName="mariadb-account-create-update"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845710 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0504e23b-4c14-4256-b888-76415dbc9ae6" containerName="mariadb-account-create-update"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845749 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df0ca5f-5a5f-4186-8d30-66d7dabaa31c" containerName="mariadb-database-create"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845773 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ccf429-6851-44f1-8370-b75877bbaa53" containerName="mariadb-database-create"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845820 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="95839f5f-fa72-41a6-88aa-bfd01a5c6571" containerName="mariadb-account-create-update"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845850 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="731e8277-1449-4718-af2c-ded02ecfe4a9" containerName="mariadb-account-create-update"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845869 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc956353-4430-4219-b077-5fe86ba366aa" containerName="cinder-api"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845886 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d1bc0cf-f7e6-4621-9098-b78e16900a73" containerName="mariadb-database-create"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.845910 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc956353-4430-4219-b077-5fe86ba366aa" containerName="cinder-api-log"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.847509 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.849763 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.849840 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.850181 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.860198 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc956353-4430-4219-b077-5fe86ba366aa" path="/var/lib/kubelet/pods/bc956353-4430-4219-b077-5fe86ba366aa/volumes"
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.860822 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 19:53:20 crc kubenswrapper[4775]: I1125
19:53:20.925440 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.925535 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.925777 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8b59\" (UniqueName: \"kubernetes.io/projected/67cc10a4-ce8a-4820-ab5d-747e28306ee9-kube-api-access-r8b59\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.925842 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-scripts\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.925962 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-config-data-custom\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.926025 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.926106 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-config-data\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.926177 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cc10a4-ce8a-4820-ab5d-747e28306ee9-logs\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:20 crc kubenswrapper[4775]: I1125 19:53:20.926251 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67cc10a4-ce8a-4820-ab5d-747e28306ee9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.028023 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8b59\" (UniqueName: \"kubernetes.io/projected/67cc10a4-ce8a-4820-ab5d-747e28306ee9-kube-api-access-r8b59\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.028092 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-scripts\") pod \"cinder-api-0\" (UID: 
\"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.028138 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-config-data-custom\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.028156 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.028194 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-config-data\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.028227 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cc10a4-ce8a-4820-ab5d-747e28306ee9-logs\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.028278 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67cc10a4-ce8a-4820-ab5d-747e28306ee9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.028297 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.028311 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.029840 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cc10a4-ce8a-4820-ab5d-747e28306ee9-logs\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.031137 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67cc10a4-ce8a-4820-ab5d-747e28306ee9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.032144 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.032222 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc 
kubenswrapper[4775]: I1125 19:53:21.033559 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-config-data\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.034629 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-scripts\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.034899 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.037323 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67cc10a4-ce8a-4820-ab5d-747e28306ee9-config-data-custom\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.049670 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8b59\" (UniqueName: \"kubernetes.io/projected/67cc10a4-ce8a-4820-ab5d-747e28306ee9-kube-api-access-r8b59\") pod \"cinder-api-0\" (UID: \"67cc10a4-ce8a-4820-ab5d-747e28306ee9\") " pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.250826 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.605291 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerStarted","Data":"8678f3719b90d1b823590fc8a49e33106f25525f644a7bfa8bb96d2caaef413f"} Nov 25 19:53:21 crc kubenswrapper[4775]: I1125 19:53:21.704462 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 19:53:21 crc kubenswrapper[4775]: W1125 19:53:21.707178 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67cc10a4_ce8a_4820_ab5d_747e28306ee9.slice/crio-c5cdc82d58a53d601a94883c429112a7cd7a9adab965bfe693fe175772087cb8 WatchSource:0}: Error finding container c5cdc82d58a53d601a94883c429112a7cd7a9adab965bfe693fe175772087cb8: Status 404 returned error can't find the container with id c5cdc82d58a53d601a94883c429112a7cd7a9adab965bfe693fe175772087cb8 Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.459538 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gfhh4"] Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.462143 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.464544 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mx6zm" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.465126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.466636 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gfhh4"] Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.468631 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.559168 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-config-data\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.559313 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.559697 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvll5\" (UniqueName: \"kubernetes.io/projected/a3f2e969-6ce1-4720-a952-904324c3795c-kube-api-access-jvll5\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " 
pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.559861 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-scripts\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.619015 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerStarted","Data":"6cb933e2c5f8031f652eb8c1451988c6e84558f6e28185952acd9a0b791803b3"} Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.619241 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="ceilometer-central-agent" containerID="cri-o://84c6f9961857323a7db95312ba406447490ccf9baac51c0fbb8f87c1ba42440c" gracePeriod=30 Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.619567 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.619921 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="proxy-httpd" containerID="cri-o://6cb933e2c5f8031f652eb8c1451988c6e84558f6e28185952acd9a0b791803b3" gracePeriod=30 Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.619981 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="sg-core" containerID="cri-o://8678f3719b90d1b823590fc8a49e33106f25525f644a7bfa8bb96d2caaef413f" gracePeriod=30 Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 
19:53:22.620026 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="ceilometer-notification-agent" containerID="cri-o://3ccaafe2c8e886e0c57c876d54879fe300bcbb9298b0715feccdb0f2807e3ba5" gracePeriod=30 Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.625328 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"67cc10a4-ce8a-4820-ab5d-747e28306ee9","Type":"ContainerStarted","Data":"595b848a421fec4b7e2673a39973b5f08e43f369fe845897a73ad44b1f8296ab"} Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.625376 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"67cc10a4-ce8a-4820-ab5d-747e28306ee9","Type":"ContainerStarted","Data":"c5cdc82d58a53d601a94883c429112a7cd7a9adab965bfe693fe175772087cb8"} Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.639497 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.150304029 podStartE2EDuration="5.639462609s" podCreationTimestamp="2025-11-25 19:53:17 +0000 UTC" firstStartedPulling="2025-11-25 19:53:18.3132541 +0000 UTC m=+1180.229616456" lastFinishedPulling="2025-11-25 19:53:21.80241267 +0000 UTC m=+1183.718775036" observedRunningTime="2025-11-25 19:53:22.637152176 +0000 UTC m=+1184.553514542" watchObservedRunningTime="2025-11-25 19:53:22.639462609 +0000 UTC m=+1184.555824975" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.661199 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvll5\" (UniqueName: \"kubernetes.io/projected/a3f2e969-6ce1-4720-a952-904324c3795c-kube-api-access-jvll5\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.661301 
4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-scripts\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.661353 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-config-data\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.661394 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.668867 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-config-data\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.669024 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.669539 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-scripts\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.677086 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvll5\" (UniqueName: \"kubernetes.io/projected/a3f2e969-6ce1-4720-a952-904324c3795c-kube-api-access-jvll5\") pod \"nova-cell0-conductor-db-sync-gfhh4\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:22 crc kubenswrapper[4775]: I1125 19:53:22.807024 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.294980 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gfhh4"] Nov 25 19:53:23 crc kubenswrapper[4775]: W1125 19:53:23.307986 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3f2e969_6ce1_4720_a952_904324c3795c.slice/crio-c6c6a9555cc8b594f08fda8c45d2b692d129ae1372933bf43c139946e41e5306 WatchSource:0}: Error finding container c6c6a9555cc8b594f08fda8c45d2b692d129ae1372933bf43c139946e41e5306: Status 404 returned error can't find the container with id c6c6a9555cc8b594f08fda8c45d2b692d129ae1372933bf43c139946e41e5306 Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.635548 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gfhh4" event={"ID":"a3f2e969-6ce1-4720-a952-904324c3795c","Type":"ContainerStarted","Data":"c6c6a9555cc8b594f08fda8c45d2b692d129ae1372933bf43c139946e41e5306"} Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.638510 4775 generic.go:334] 
"Generic (PLEG): container finished" podID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerID="6cb933e2c5f8031f652eb8c1451988c6e84558f6e28185952acd9a0b791803b3" exitCode=0 Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.638539 4775 generic.go:334] "Generic (PLEG): container finished" podID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerID="8678f3719b90d1b823590fc8a49e33106f25525f644a7bfa8bb96d2caaef413f" exitCode=2 Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.638546 4775 generic.go:334] "Generic (PLEG): container finished" podID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerID="3ccaafe2c8e886e0c57c876d54879fe300bcbb9298b0715feccdb0f2807e3ba5" exitCode=0 Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.638553 4775 generic.go:334] "Generic (PLEG): container finished" podID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerID="84c6f9961857323a7db95312ba406447490ccf9baac51c0fbb8f87c1ba42440c" exitCode=0 Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.638580 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerDied","Data":"6cb933e2c5f8031f652eb8c1451988c6e84558f6e28185952acd9a0b791803b3"} Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.638617 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerDied","Data":"8678f3719b90d1b823590fc8a49e33106f25525f644a7bfa8bb96d2caaef413f"} Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.638673 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerDied","Data":"3ccaafe2c8e886e0c57c876d54879fe300bcbb9298b0715feccdb0f2807e3ba5"} Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.638689 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerDied","Data":"84c6f9961857323a7db95312ba406447490ccf9baac51c0fbb8f87c1ba42440c"} Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.641228 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"67cc10a4-ce8a-4820-ab5d-747e28306ee9","Type":"ContainerStarted","Data":"b8c0f074dcacbf11a6f55732657a1592df02e6a173c37f4fc03c07b3c7d55a0e"} Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.642316 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.667414 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.667398513 podStartE2EDuration="3.667398513s" podCreationTimestamp="2025-11-25 19:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:53:23.662134231 +0000 UTC m=+1185.578496597" watchObservedRunningTime="2025-11-25 19:53:23.667398513 +0000 UTC m=+1185.583760869" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.674358 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.778174 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-scripts\") pod \"2d130916-7d97-4a78-8db5-29b19b4b896a\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.778325 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtkrr\" (UniqueName: \"kubernetes.io/projected/2d130916-7d97-4a78-8db5-29b19b4b896a-kube-api-access-jtkrr\") pod \"2d130916-7d97-4a78-8db5-29b19b4b896a\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.778369 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-run-httpd\") pod \"2d130916-7d97-4a78-8db5-29b19b4b896a\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.778398 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-log-httpd\") pod \"2d130916-7d97-4a78-8db5-29b19b4b896a\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.778457 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-sg-core-conf-yaml\") pod \"2d130916-7d97-4a78-8db5-29b19b4b896a\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.778528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-config-data\") pod \"2d130916-7d97-4a78-8db5-29b19b4b896a\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.778599 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-combined-ca-bundle\") pod \"2d130916-7d97-4a78-8db5-29b19b4b896a\" (UID: \"2d130916-7d97-4a78-8db5-29b19b4b896a\") " Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.778952 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d130916-7d97-4a78-8db5-29b19b4b896a" (UID: "2d130916-7d97-4a78-8db5-29b19b4b896a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.779174 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d130916-7d97-4a78-8db5-29b19b4b896a" (UID: "2d130916-7d97-4a78-8db5-29b19b4b896a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.781367 4775 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.781418 4775 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d130916-7d97-4a78-8db5-29b19b4b896a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.783389 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d130916-7d97-4a78-8db5-29b19b4b896a-kube-api-access-jtkrr" (OuterVolumeSpecName: "kube-api-access-jtkrr") pod "2d130916-7d97-4a78-8db5-29b19b4b896a" (UID: "2d130916-7d97-4a78-8db5-29b19b4b896a"). InnerVolumeSpecName "kube-api-access-jtkrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.783624 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-scripts" (OuterVolumeSpecName: "scripts") pod "2d130916-7d97-4a78-8db5-29b19b4b896a" (UID: "2d130916-7d97-4a78-8db5-29b19b4b896a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.821695 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d130916-7d97-4a78-8db5-29b19b4b896a" (UID: "2d130916-7d97-4a78-8db5-29b19b4b896a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.868318 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d130916-7d97-4a78-8db5-29b19b4b896a" (UID: "2d130916-7d97-4a78-8db5-29b19b4b896a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.883154 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtkrr\" (UniqueName: \"kubernetes.io/projected/2d130916-7d97-4a78-8db5-29b19b4b896a-kube-api-access-jtkrr\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.883186 4775 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.883196 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.883208 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.899829 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-config-data" (OuterVolumeSpecName: "config-data") pod "2d130916-7d97-4a78-8db5-29b19b4b896a" (UID: "2d130916-7d97-4a78-8db5-29b19b4b896a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:23 crc kubenswrapper[4775]: I1125 19:53:23.985262 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d130916-7d97-4a78-8db5-29b19b4b896a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.667350 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d130916-7d97-4a78-8db5-29b19b4b896a","Type":"ContainerDied","Data":"6bbc44eefded663a901fc537cec1055134da168eb040cfebd8080982d771dcec"} Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.667404 4775 scope.go:117] "RemoveContainer" containerID="6cb933e2c5f8031f652eb8c1451988c6e84558f6e28185952acd9a0b791803b3" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.667410 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.697106 4775 scope.go:117] "RemoveContainer" containerID="8678f3719b90d1b823590fc8a49e33106f25525f644a7bfa8bb96d2caaef413f" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.701976 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.718456 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.733352 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:24 crc kubenswrapper[4775]: E1125 19:53:24.737149 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="sg-core" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.737194 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="sg-core" Nov 25 19:53:24 crc 
kubenswrapper[4775]: E1125 19:53:24.737226 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="ceilometer-notification-agent" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.737393 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="ceilometer-notification-agent" Nov 25 19:53:24 crc kubenswrapper[4775]: E1125 19:53:24.738523 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="proxy-httpd" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.738543 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="proxy-httpd" Nov 25 19:53:24 crc kubenswrapper[4775]: E1125 19:53:24.738593 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="ceilometer-central-agent" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.738604 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="ceilometer-central-agent" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.739145 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="sg-core" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.739174 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="ceilometer-notification-agent" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.739197 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" containerName="proxy-httpd" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.739219 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" 
containerName="ceilometer-central-agent" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.754263 4775 scope.go:117] "RemoveContainer" containerID="3ccaafe2c8e886e0c57c876d54879fe300bcbb9298b0715feccdb0f2807e3ba5" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.755156 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.755295 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.758280 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.759025 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.794944 4775 scope.go:117] "RemoveContainer" containerID="84c6f9961857323a7db95312ba406447490ccf9baac51c0fbb8f87c1ba42440c" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.859395 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d130916-7d97-4a78-8db5-29b19b4b896a" path="/var/lib/kubelet/pods/2d130916-7d97-4a78-8db5-29b19b4b896a/volumes" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.901910 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqbxr\" (UniqueName: \"kubernetes.io/projected/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-kube-api-access-wqbxr\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.901969 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-config-data\") pod \"ceilometer-0\" (UID: 
\"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.902029 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.902093 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-run-httpd\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.902189 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.902218 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-scripts\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:24 crc kubenswrapper[4775]: I1125 19:53:24.902237 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-log-httpd\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.003531 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.003570 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-scripts\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.003587 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-log-httpd\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.003708 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqbxr\" (UniqueName: \"kubernetes.io/projected/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-kube-api-access-wqbxr\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.003725 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-config-data\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.003747 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.003765 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-run-httpd\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.004199 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-run-httpd\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.007178 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-log-httpd\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.008804 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-config-data\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.012228 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-scripts\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.015062 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.017394 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.021349 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqbxr\" (UniqueName: \"kubernetes.io/projected/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-kube-api-access-wqbxr\") pod \"ceilometer-0\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.092850 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.342312 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.692394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerStarted","Data":"50fc4762fd93b8ae576dbe69d711a5f7efb841099183034852d9e2da37a4d488"} Nov 25 19:53:25 crc kubenswrapper[4775]: I1125 19:53:25.877443 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:26 crc kubenswrapper[4775]: I1125 19:53:26.702308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerStarted","Data":"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965"} Nov 25 19:53:31 crc kubenswrapper[4775]: I1125 19:53:31.750289 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerStarted","Data":"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e"} Nov 25 19:53:31 crc kubenswrapper[4775]: I1125 19:53:31.751673 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gfhh4" event={"ID":"a3f2e969-6ce1-4720-a952-904324c3795c","Type":"ContainerStarted","Data":"9804a1d98e005914775f5746f81df983be4e0b4afc5f1356cd883c8c1d5b5ceb"} Nov 25 19:53:31 crc kubenswrapper[4775]: I1125 19:53:31.769151 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gfhh4" podStartSLOduration=1.8895475080000002 podStartE2EDuration="9.76913556s" podCreationTimestamp="2025-11-25 19:53:22 +0000 UTC" firstStartedPulling="2025-11-25 19:53:23.311399623 +0000 UTC m=+1185.227761989" lastFinishedPulling="2025-11-25 
19:53:31.190987675 +0000 UTC m=+1193.107350041" observedRunningTime="2025-11-25 19:53:31.76649867 +0000 UTC m=+1193.682861036" watchObservedRunningTime="2025-11-25 19:53:31.76913556 +0000 UTC m=+1193.685497926" Nov 25 19:53:32 crc kubenswrapper[4775]: I1125 19:53:32.762354 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerStarted","Data":"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272"} Nov 25 19:53:33 crc kubenswrapper[4775]: I1125 19:53:33.250611 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 19:53:34 crc kubenswrapper[4775]: I1125 19:53:34.787915 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerStarted","Data":"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36"} Nov 25 19:53:34 crc kubenswrapper[4775]: I1125 19:53:34.788738 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 19:53:34 crc kubenswrapper[4775]: I1125 19:53:34.788726 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="ceilometer-notification-agent" containerID="cri-o://ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e" gracePeriod=30 Nov 25 19:53:34 crc kubenswrapper[4775]: I1125 19:53:34.788644 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="proxy-httpd" containerID="cri-o://c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36" gracePeriod=30 Nov 25 19:53:34 crc kubenswrapper[4775]: I1125 19:53:34.788610 4775 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="sg-core" containerID="cri-o://9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272" gracePeriod=30 Nov 25 19:53:34 crc kubenswrapper[4775]: I1125 19:53:34.788116 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="ceilometer-central-agent" containerID="cri-o://2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965" gracePeriod=30 Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.626518 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.796753 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-combined-ca-bundle\") pod \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.797450 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqbxr\" (UniqueName: \"kubernetes.io/projected/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-kube-api-access-wqbxr\") pod \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.797800 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-log-httpd\") pod \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.797852 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-run-httpd\") pod \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.797871 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-scripts\") pod \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.797951 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-config-data\") pod \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.797985 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-sg-core-conf-yaml\") pod \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\" (UID: \"c47c43f7-f207-46ce-bfa4-7cb709a8dee2\") " Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.798580 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c47c43f7-f207-46ce-bfa4-7cb709a8dee2" (UID: "c47c43f7-f207-46ce-bfa4-7cb709a8dee2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.798610 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c47c43f7-f207-46ce-bfa4-7cb709a8dee2" (UID: "c47c43f7-f207-46ce-bfa4-7cb709a8dee2"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.802836 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-kube-api-access-wqbxr" (OuterVolumeSpecName: "kube-api-access-wqbxr") pod "c47c43f7-f207-46ce-bfa4-7cb709a8dee2" (UID: "c47c43f7-f207-46ce-bfa4-7cb709a8dee2"). InnerVolumeSpecName "kube-api-access-wqbxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.804458 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-scripts" (OuterVolumeSpecName: "scripts") pod "c47c43f7-f207-46ce-bfa4-7cb709a8dee2" (UID: "c47c43f7-f207-46ce-bfa4-7cb709a8dee2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.806583 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.806690 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerDied","Data":"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36"} Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.806765 4775 scope.go:117] "RemoveContainer" containerID="c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.806453 4775 generic.go:334] "Generic (PLEG): container finished" podID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerID="c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36" exitCode=0 Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.808316 4775 generic.go:334] "Generic (PLEG): container finished" podID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerID="9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272" exitCode=2 Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.808333 4775 generic.go:334] "Generic (PLEG): container finished" podID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerID="ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e" exitCode=0 Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.808345 4775 generic.go:334] "Generic (PLEG): container finished" podID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerID="2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965" exitCode=0 Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.808369 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerDied","Data":"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272"} Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.808398 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerDied","Data":"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e"} Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.808419 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerDied","Data":"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965"} Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.808435 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c43f7-f207-46ce-bfa4-7cb709a8dee2","Type":"ContainerDied","Data":"50fc4762fd93b8ae576dbe69d711a5f7efb841099183034852d9e2da37a4d488"} Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.832610 4775 scope.go:117] "RemoveContainer" containerID="9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.858231 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c47c43f7-f207-46ce-bfa4-7cb709a8dee2" (UID: "c47c43f7-f207-46ce-bfa4-7cb709a8dee2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.860952 4775 scope.go:117] "RemoveContainer" containerID="ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.885122 4775 scope.go:117] "RemoveContainer" containerID="2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.895114 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c47c43f7-f207-46ce-bfa4-7cb709a8dee2" (UID: "c47c43f7-f207-46ce-bfa4-7cb709a8dee2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.904548 4775 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.904577 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.904586 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqbxr\" (UniqueName: \"kubernetes.io/projected/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-kube-api-access-wqbxr\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.904595 4775 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 
19:53:35.904603 4775 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.904612 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:35 crc kubenswrapper[4775]: I1125 19:53:35.934020 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-config-data" (OuterVolumeSpecName: "config-data") pod "c47c43f7-f207-46ce-bfa4-7cb709a8dee2" (UID: "c47c43f7-f207-46ce-bfa4-7cb709a8dee2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.006083 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c43f7-f207-46ce-bfa4-7cb709a8dee2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.027157 4775 scope.go:117] "RemoveContainer" containerID="c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36" Nov 25 19:53:36 crc kubenswrapper[4775]: E1125 19:53:36.027998 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36\": container with ID starting with c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36 not found: ID does not exist" containerID="c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.028046 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36"} err="failed to get container status \"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36\": rpc error: code = NotFound desc = could not find container \"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36\": container with ID starting with c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.028073 4775 scope.go:117] "RemoveContainer" containerID="9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272" Nov 25 19:53:36 crc kubenswrapper[4775]: E1125 19:53:36.028540 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272\": container with ID starting with 9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272 not found: ID does not exist" containerID="9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.028581 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272"} err="failed to get container status \"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272\": rpc error: code = NotFound desc = could not find container \"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272\": container with ID starting with 9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.028607 4775 scope.go:117] "RemoveContainer" containerID="ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e" Nov 25 19:53:36 crc kubenswrapper[4775]: E1125 19:53:36.029018 4775 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e\": container with ID starting with ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e not found: ID does not exist" containerID="ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.029052 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e"} err="failed to get container status \"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e\": rpc error: code = NotFound desc = could not find container \"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e\": container with ID starting with ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.029071 4775 scope.go:117] "RemoveContainer" containerID="2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965" Nov 25 19:53:36 crc kubenswrapper[4775]: E1125 19:53:36.029353 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965\": container with ID starting with 2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965 not found: ID does not exist" containerID="2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.029378 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965"} err="failed to get container status \"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965\": rpc error: code = NotFound desc = could not find container 
\"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965\": container with ID starting with 2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.029393 4775 scope.go:117] "RemoveContainer" containerID="c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.029641 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36"} err="failed to get container status \"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36\": rpc error: code = NotFound desc = could not find container \"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36\": container with ID starting with c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.029679 4775 scope.go:117] "RemoveContainer" containerID="9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.030019 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272"} err="failed to get container status \"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272\": rpc error: code = NotFound desc = could not find container \"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272\": container with ID starting with 9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.030047 4775 scope.go:117] "RemoveContainer" containerID="ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.030311 4775 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e"} err="failed to get container status \"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e\": rpc error: code = NotFound desc = could not find container \"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e\": container with ID starting with ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.030337 4775 scope.go:117] "RemoveContainer" containerID="2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.030563 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965"} err="failed to get container status \"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965\": rpc error: code = NotFound desc = could not find container \"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965\": container with ID starting with 2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.030587 4775 scope.go:117] "RemoveContainer" containerID="c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.030823 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36"} err="failed to get container status \"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36\": rpc error: code = NotFound desc = could not find container \"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36\": container with ID starting with 
c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.030847 4775 scope.go:117] "RemoveContainer" containerID="9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.031074 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272"} err="failed to get container status \"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272\": rpc error: code = NotFound desc = could not find container \"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272\": container with ID starting with 9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.031093 4775 scope.go:117] "RemoveContainer" containerID="ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.031328 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e"} err="failed to get container status \"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e\": rpc error: code = NotFound desc = could not find container \"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e\": container with ID starting with ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.031361 4775 scope.go:117] "RemoveContainer" containerID="2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.031568 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965"} err="failed to get container status \"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965\": rpc error: code = NotFound desc = could not find container \"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965\": container with ID starting with 2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.031585 4775 scope.go:117] "RemoveContainer" containerID="c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.031955 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36"} err="failed to get container status \"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36\": rpc error: code = NotFound desc = could not find container \"c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36\": container with ID starting with c89b20562c7324fbd65725db135a71f0f8985b21affe6e03e3a40fb5a74bbb36 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.031990 4775 scope.go:117] "RemoveContainer" containerID="9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.032281 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272"} err="failed to get container status \"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272\": rpc error: code = NotFound desc = could not find container \"9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272\": container with ID starting with 9969022233c8242d9397cd5742ce40407453383cd2ce2b7eb219cd82e8658272 not found: ID does not 
exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.032301 4775 scope.go:117] "RemoveContainer" containerID="ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.032566 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e"} err="failed to get container status \"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e\": rpc error: code = NotFound desc = could not find container \"ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e\": container with ID starting with ea1578a0316d183a999d86dfb201c6009a5397ace0b238c4644a53a3af6d0e5e not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.032589 4775 scope.go:117] "RemoveContainer" containerID="2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.033343 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965"} err="failed to get container status \"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965\": rpc error: code = NotFound desc = could not find container \"2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965\": container with ID starting with 2b98a9dbe559120382f7afa29867da1b2970aa824e8001dd78aba50a06c47965 not found: ID does not exist" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.166457 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.190774 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.201837 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 
19:53:36 crc kubenswrapper[4775]: E1125 19:53:36.202318 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="proxy-httpd" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.202339 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="proxy-httpd" Nov 25 19:53:36 crc kubenswrapper[4775]: E1125 19:53:36.202352 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="sg-core" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.202361 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="sg-core" Nov 25 19:53:36 crc kubenswrapper[4775]: E1125 19:53:36.202374 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="ceilometer-central-agent" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.202382 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="ceilometer-central-agent" Nov 25 19:53:36 crc kubenswrapper[4775]: E1125 19:53:36.202401 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="ceilometer-notification-agent" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.202409 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="ceilometer-notification-agent" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.202606 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="ceilometer-central-agent" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.202628 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" 
containerName="ceilometer-notification-agent" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.202640 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="sg-core" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.202678 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" containerName="proxy-httpd" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.214445 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.214564 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.218966 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.219148 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.311691 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.311734 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-scripts\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.311953 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-run-httpd\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.312136 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnq9w\" (UniqueName: \"kubernetes.io/projected/bf62a13f-3710-46a8-841f-26c01081361d-kube-api-access-nnq9w\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.312184 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-log-httpd\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.312393 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-config-data\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.312561 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.414503 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-config-data\") pod \"ceilometer-0\" (UID: 
\"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.414580 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.414608 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.414630 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-scripts\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.414682 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-run-httpd\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.414721 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnq9w\" (UniqueName: \"kubernetes.io/projected/bf62a13f-3710-46a8-841f-26c01081361d-kube-api-access-nnq9w\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.414741 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-log-httpd\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.415172 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-run-httpd\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.415241 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-log-httpd\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.420404 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.421596 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-scripts\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.423241 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-config-data\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.428079 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.445513 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnq9w\" (UniqueName: \"kubernetes.io/projected/bf62a13f-3710-46a8-841f-26c01081361d-kube-api-access-nnq9w\") pod \"ceilometer-0\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") " pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.540573 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.861328 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47c43f7-f207-46ce-bfa4-7cb709a8dee2" path="/var/lib/kubelet/pods/c47c43f7-f207-46ce-bfa4-7cb709a8dee2/volumes" Nov 25 19:53:36 crc kubenswrapper[4775]: I1125 19:53:36.996338 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:53:36 crc kubenswrapper[4775]: W1125 19:53:36.998221 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf62a13f_3710_46a8_841f_26c01081361d.slice/crio-b2ca78eb28d70b0ac21a19fd31dca742b9fc94ed2dd443af4725b2ae189e3eec WatchSource:0}: Error finding container b2ca78eb28d70b0ac21a19fd31dca742b9fc94ed2dd443af4725b2ae189e3eec: Status 404 returned error can't find the container with id b2ca78eb28d70b0ac21a19fd31dca742b9fc94ed2dd443af4725b2ae189e3eec Nov 25 19:53:37 crc kubenswrapper[4775]: I1125 19:53:37.857780 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerStarted","Data":"19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d"} Nov 25 19:53:37 crc kubenswrapper[4775]: I1125 19:53:37.858062 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerStarted","Data":"b2ca78eb28d70b0ac21a19fd31dca742b9fc94ed2dd443af4725b2ae189e3eec"} Nov 25 19:53:38 crc kubenswrapper[4775]: I1125 19:53:38.895459 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerStarted","Data":"53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4"} Nov 25 19:53:39 crc kubenswrapper[4775]: I1125 19:53:39.912222 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerStarted","Data":"a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77"} Nov 25 19:53:40 crc kubenswrapper[4775]: I1125 19:53:40.922238 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerStarted","Data":"650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe"} Nov 25 19:53:40 crc kubenswrapper[4775]: I1125 19:53:40.922791 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 19:53:40 crc kubenswrapper[4775]: I1125 19:53:40.964225 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.821524466 podStartE2EDuration="4.964199911s" podCreationTimestamp="2025-11-25 19:53:36 +0000 UTC" firstStartedPulling="2025-11-25 19:53:37.000488717 +0000 UTC m=+1198.916851093" lastFinishedPulling="2025-11-25 19:53:40.143164142 +0000 UTC m=+1202.059526538" observedRunningTime="2025-11-25 
19:53:40.947475152 +0000 UTC m=+1202.863837548" watchObservedRunningTime="2025-11-25 19:53:40.964199911 +0000 UTC m=+1202.880562317" Nov 25 19:53:44 crc kubenswrapper[4775]: I1125 19:53:44.973417 4775 generic.go:334] "Generic (PLEG): container finished" podID="a3f2e969-6ce1-4720-a952-904324c3795c" containerID="9804a1d98e005914775f5746f81df983be4e0b4afc5f1356cd883c8c1d5b5ceb" exitCode=0 Nov 25 19:53:44 crc kubenswrapper[4775]: I1125 19:53:44.974180 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gfhh4" event={"ID":"a3f2e969-6ce1-4720-a952-904324c3795c","Type":"ContainerDied","Data":"9804a1d98e005914775f5746f81df983be4e0b4afc5f1356cd883c8c1d5b5ceb"} Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.438631 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.545907 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvll5\" (UniqueName: \"kubernetes.io/projected/a3f2e969-6ce1-4720-a952-904324c3795c-kube-api-access-jvll5\") pod \"a3f2e969-6ce1-4720-a952-904324c3795c\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.546010 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-config-data\") pod \"a3f2e969-6ce1-4720-a952-904324c3795c\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.546126 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-scripts\") pod \"a3f2e969-6ce1-4720-a952-904324c3795c\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 
19:53:46.546161 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-combined-ca-bundle\") pod \"a3f2e969-6ce1-4720-a952-904324c3795c\" (UID: \"a3f2e969-6ce1-4720-a952-904324c3795c\") " Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.552132 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f2e969-6ce1-4720-a952-904324c3795c-kube-api-access-jvll5" (OuterVolumeSpecName: "kube-api-access-jvll5") pod "a3f2e969-6ce1-4720-a952-904324c3795c" (UID: "a3f2e969-6ce1-4720-a952-904324c3795c"). InnerVolumeSpecName "kube-api-access-jvll5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.556562 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-scripts" (OuterVolumeSpecName: "scripts") pod "a3f2e969-6ce1-4720-a952-904324c3795c" (UID: "a3f2e969-6ce1-4720-a952-904324c3795c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.572220 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3f2e969-6ce1-4720-a952-904324c3795c" (UID: "a3f2e969-6ce1-4720-a952-904324c3795c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.577514 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-config-data" (OuterVolumeSpecName: "config-data") pod "a3f2e969-6ce1-4720-a952-904324c3795c" (UID: "a3f2e969-6ce1-4720-a952-904324c3795c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.648428 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvll5\" (UniqueName: \"kubernetes.io/projected/a3f2e969-6ce1-4720-a952-904324c3795c-kube-api-access-jvll5\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.648461 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.648472 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:46 crc kubenswrapper[4775]: I1125 19:53:46.648481 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f2e969-6ce1-4720-a952-904324c3795c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.000348 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gfhh4" event={"ID":"a3f2e969-6ce1-4720-a952-904324c3795c","Type":"ContainerDied","Data":"c6c6a9555cc8b594f08fda8c45d2b692d129ae1372933bf43c139946e41e5306"} Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.000722 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6c6a9555cc8b594f08fda8c45d2b692d129ae1372933bf43c139946e41e5306" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.000783 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gfhh4" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.120378 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 19:53:47 crc kubenswrapper[4775]: E1125 19:53:47.120772 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f2e969-6ce1-4720-a952-904324c3795c" containerName="nova-cell0-conductor-db-sync" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.120791 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f2e969-6ce1-4720-a952-904324c3795c" containerName="nova-cell0-conductor-db-sync" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.121021 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f2e969-6ce1-4720-a952-904324c3795c" containerName="nova-cell0-conductor-db-sync" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.121691 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.124616 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.125112 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mx6zm" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.136890 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.260666 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c11a62e-4c1d-4451-9350-4a1d9458de6e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5c11a62e-4c1d-4451-9350-4a1d9458de6e\") " pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 
19:53:47.260790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pckmv\" (UniqueName: \"kubernetes.io/projected/5c11a62e-4c1d-4451-9350-4a1d9458de6e-kube-api-access-pckmv\") pod \"nova-cell0-conductor-0\" (UID: \"5c11a62e-4c1d-4451-9350-4a1d9458de6e\") " pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.260832 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c11a62e-4c1d-4451-9350-4a1d9458de6e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5c11a62e-4c1d-4451-9350-4a1d9458de6e\") " pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.365308 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pckmv\" (UniqueName: \"kubernetes.io/projected/5c11a62e-4c1d-4451-9350-4a1d9458de6e-kube-api-access-pckmv\") pod \"nova-cell0-conductor-0\" (UID: \"5c11a62e-4c1d-4451-9350-4a1d9458de6e\") " pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.365366 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c11a62e-4c1d-4451-9350-4a1d9458de6e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5c11a62e-4c1d-4451-9350-4a1d9458de6e\") " pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.365405 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c11a62e-4c1d-4451-9350-4a1d9458de6e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5c11a62e-4c1d-4451-9350-4a1d9458de6e\") " pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.372209 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c11a62e-4c1d-4451-9350-4a1d9458de6e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5c11a62e-4c1d-4451-9350-4a1d9458de6e\") " pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.372553 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c11a62e-4c1d-4451-9350-4a1d9458de6e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5c11a62e-4c1d-4451-9350-4a1d9458de6e\") " pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.384535 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pckmv\" (UniqueName: \"kubernetes.io/projected/5c11a62e-4c1d-4451-9350-4a1d9458de6e-kube-api-access-pckmv\") pod \"nova-cell0-conductor-0\" (UID: \"5c11a62e-4c1d-4451-9350-4a1d9458de6e\") " pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.439492 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:47 crc kubenswrapper[4775]: I1125 19:53:47.869978 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 19:53:48 crc kubenswrapper[4775]: I1125 19:53:48.009816 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5c11a62e-4c1d-4451-9350-4a1d9458de6e","Type":"ContainerStarted","Data":"0aa7a421b40e422ba0a5d6bf48fe3c209cc79a8d825abb7677d6538fd32a700b"} Nov 25 19:53:49 crc kubenswrapper[4775]: I1125 19:53:49.021357 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5c11a62e-4c1d-4451-9350-4a1d9458de6e","Type":"ContainerStarted","Data":"e36facb0b52d3675ee5a9cc4cf0988c38b234026aa1f2d843d7a3645062f96f2"} Nov 25 19:53:49 crc kubenswrapper[4775]: I1125 19:53:49.021543 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:49 crc kubenswrapper[4775]: I1125 19:53:49.064409 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.064381136 podStartE2EDuration="2.064381136s" podCreationTimestamp="2025-11-25 19:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:53:49.048174011 +0000 UTC m=+1210.964536417" watchObservedRunningTime="2025-11-25 19:53:49.064381136 +0000 UTC m=+1210.980743542" Nov 25 19:53:57 crc kubenswrapper[4775]: I1125 19:53:57.486232 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.058285 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xd9d5"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.059232 4775 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.062822 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.062843 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.070496 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xd9d5"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.175323 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lkb4\" (UniqueName: \"kubernetes.io/projected/a74e1346-a6a0-455c-917a-fa611dc53263-kube-api-access-6lkb4\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.175374 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-scripts\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.175405 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-config-data\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.175468 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.219213 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.220956 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.239869 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.241326 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.277023 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.277162 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lkb4\" (UniqueName: \"kubernetes.io/projected/a74e1346-a6a0-455c-917a-fa611dc53263-kube-api-access-6lkb4\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.277182 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-scripts\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") 
" pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.277199 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-config-data\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.284278 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-config-data\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.284595 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.284943 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-scripts\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.301742 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lkb4\" (UniqueName: \"kubernetes.io/projected/a74e1346-a6a0-455c-917a-fa611dc53263-kube-api-access-6lkb4\") pod \"nova-cell0-cell-mapping-xd9d5\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: 
I1125 19:53:58.303491 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.304676 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.308718 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.314794 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.378151 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.378226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2f763f2-c489-42ba-af44-450243c60955-logs\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.378346 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6r7\" (UniqueName: \"kubernetes.io/projected/b2f763f2-c489-42ba-af44-450243c60955-kube-api-access-db6r7\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.378396 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-config-data\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.385451 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.386813 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.388125 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.396542 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.403123 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.411897 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.414389 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.421400 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.461853 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.479592 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.479702 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db6r7\" (UniqueName: \"kubernetes.io/projected/b2f763f2-c489-42ba-af44-450243c60955-kube-api-access-db6r7\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.479727 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sklbl\" (UniqueName: \"kubernetes.io/projected/3956a135-c94d-443b-9686-4b77db2e7df8-kube-api-access-sklbl\") pod \"nova-cell1-novncproxy-0\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.479760 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 
19:53:58.479791 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-config-data\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.479814 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2dxt\" (UniqueName: \"kubernetes.io/projected/a7ee223f-18f4-4136-87e1-3074d439352d-kube-api-access-q2dxt\") pod \"nova-scheduler-0\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.479850 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-config-data\") pod \"nova-scheduler-0\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.479867 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.479886 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.479902 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b2f763f2-c489-42ba-af44-450243c60955-logs\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.480319 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2f763f2-c489-42ba-af44-450243c60955-logs\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.493436 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.493458 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-config-data\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.510196 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db6r7\" (UniqueName: \"kubernetes.io/projected/b2f763f2-c489-42ba-af44-450243c60955-kube-api-access-db6r7\") pod \"nova-api-0\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.543109 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.558696 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-c4ggj"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.560105 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.578896 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-c4ggj"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.591044 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sklbl\" (UniqueName: \"kubernetes.io/projected/3956a135-c94d-443b-9686-4b77db2e7df8-kube-api-access-sklbl\") pod \"nova-cell1-novncproxy-0\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.591275 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-config-data\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.591436 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.591592 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2dxt\" (UniqueName: \"kubernetes.io/projected/a7ee223f-18f4-4136-87e1-3074d439352d-kube-api-access-q2dxt\") pod \"nova-scheduler-0\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.596702 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt7jt\" (UniqueName: 
\"kubernetes.io/projected/a8c0b8ab-c36f-4322-a83d-436af58b8961-kube-api-access-wt7jt\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.596867 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-config-data\") pod \"nova-scheduler-0\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.596977 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.597146 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.597277 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c0b8ab-c36f-4322-a83d-436af58b8961-logs\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.597346 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"a7ee223f-18f4-4136-87e1-3074d439352d\") " pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.602684 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.605822 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-config-data\") pod \"nova-scheduler-0\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.609212 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.623376 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.631775 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2dxt\" (UniqueName: \"kubernetes.io/projected/a7ee223f-18f4-4136-87e1-3074d439352d-kube-api-access-q2dxt\") pod \"nova-scheduler-0\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.632431 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sklbl\" (UniqueName: \"kubernetes.io/projected/3956a135-c94d-443b-9686-4b77db2e7df8-kube-api-access-sklbl\") pod \"nova-cell1-novncproxy-0\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.681807 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.700550 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.701270 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-config-data\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.701342 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-config\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.701381 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.701412 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt7jt\" (UniqueName: 
\"kubernetes.io/projected/a8c0b8ab-c36f-4322-a83d-436af58b8961-kube-api-access-wt7jt\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.701450 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-dns-svc\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.701522 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.701551 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894xs\" (UniqueName: \"kubernetes.io/projected/8087212c-4a3c-486c-a884-fe42bbc31fa7-kube-api-access-894xs\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.701606 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c0b8ab-c36f-4322-a83d-436af58b8961-logs\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.701661 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-sb\") 
pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.702595 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c0b8ab-c36f-4322-a83d-436af58b8961-logs\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.708843 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.726166 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt7jt\" (UniqueName: \"kubernetes.io/projected/a8c0b8ab-c36f-4322-a83d-436af58b8961-kube-api-access-wt7jt\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.727082 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-config-data\") pod \"nova-metadata-0\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.803503 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-config\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.803569 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.803609 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-dns-svc\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.803677 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-894xs\" (UniqueName: \"kubernetes.io/projected/8087212c-4a3c-486c-a884-fe42bbc31fa7-kube-api-access-894xs\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.803734 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.804589 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.805154 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-config\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.806053 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.807563 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-dns-svc\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.834592 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.841731 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-894xs\" (UniqueName: \"kubernetes.io/projected/8087212c-4a3c-486c-a884-fe42bbc31fa7-kube-api-access-894xs\") pod \"dnsmasq-dns-566b5b7845-c4ggj\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.879547 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:53:58 crc kubenswrapper[4775]: I1125 19:53:58.897163 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:53:59 crc kubenswrapper[4775]: W1125 19:53:59.023657 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda74e1346_a6a0_455c_917a_fa611dc53263.slice/crio-3b1547715f1927c1f56bbff7fb5f939583e9bce154d98a46405176da391e906d WatchSource:0}: Error finding container 3b1547715f1927c1f56bbff7fb5f939583e9bce154d98a46405176da391e906d: Status 404 returned error can't find the container with id 3b1547715f1927c1f56bbff7fb5f939583e9bce154d98a46405176da391e906d Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.025877 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xd9d5"] Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.131115 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k596p"] Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.134600 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.136166 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.138482 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.140624 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k596p"] Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.165327 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xd9d5" event={"ID":"a74e1346-a6a0-455c-917a-fa611dc53263","Type":"ContainerStarted","Data":"3b1547715f1927c1f56bbff7fb5f939583e9bce154d98a46405176da391e906d"} Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.168927 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b2f763f2-c489-42ba-af44-450243c60955","Type":"ContainerStarted","Data":"ce29dc838c13214f72470d26a0704353c943ab0f518a874172a320ccd42a4e63"} Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.224022 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-scripts\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.224146 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " 
pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.224183 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-config-data\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.224289 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6gnl\" (UniqueName: \"kubernetes.io/projected/21bd9c8a-3662-40f1-b939-ed5320b65bcb-kube-api-access-m6gnl\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.267914 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.325725 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-scripts\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.326043 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.326733 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-config-data\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.326869 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6gnl\" (UniqueName: \"kubernetes.io/projected/21bd9c8a-3662-40f1-b939-ed5320b65bcb-kube-api-access-m6gnl\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.339980 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-scripts\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.340195 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.346920 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-config-data\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.367349 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:53:59 crc 
kubenswrapper[4775]: I1125 19:53:59.389223 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6gnl\" (UniqueName: \"kubernetes.io/projected/21bd9c8a-3662-40f1-b939-ed5320b65bcb-kube-api-access-m6gnl\") pod \"nova-cell1-conductor-db-sync-k596p\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.453902 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.529520 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.539534 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-c4ggj"] Nov 25 19:53:59 crc kubenswrapper[4775]: I1125 19:53:59.908764 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k596p"] Nov 25 19:53:59 crc kubenswrapper[4775]: W1125 19:53:59.911574 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21bd9c8a_3662_40f1_b939_ed5320b65bcb.slice/crio-560140d55c871f4acf6702cc662b365eb27c11e62d60a269589a35f2656d02e2 WatchSource:0}: Error finding container 560140d55c871f4acf6702cc662b365eb27c11e62d60a269589a35f2656d02e2: Status 404 returned error can't find the container with id 560140d55c871f4acf6702cc662b365eb27c11e62d60a269589a35f2656d02e2 Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.189193 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7ee223f-18f4-4136-87e1-3074d439352d","Type":"ContainerStarted","Data":"981c6a1fd5d970d8d4fee9a36e02e810bc07251f635f3a99e67a7386c61084c3"} Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.191195 4775 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8c0b8ab-c36f-4322-a83d-436af58b8961","Type":"ContainerStarted","Data":"162e48cd4ad4e9c115cbf02885724181c6c7eb67dd697d10a957b43d4fa46251"} Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.193369 4775 generic.go:334] "Generic (PLEG): container finished" podID="8087212c-4a3c-486c-a884-fe42bbc31fa7" containerID="658fd9e83c16e2ff23cc0e046d24da0d467a9783638e7bc82bab4a8fc0f81ace" exitCode=0 Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.193423 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" event={"ID":"8087212c-4a3c-486c-a884-fe42bbc31fa7","Type":"ContainerDied","Data":"658fd9e83c16e2ff23cc0e046d24da0d467a9783638e7bc82bab4a8fc0f81ace"} Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.193438 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" event={"ID":"8087212c-4a3c-486c-a884-fe42bbc31fa7","Type":"ContainerStarted","Data":"2cceb2c1380fdd29c79df88fabde65384950092765a2f8c7a8f6a576572ac51c"} Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.196430 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k596p" event={"ID":"21bd9c8a-3662-40f1-b939-ed5320b65bcb","Type":"ContainerStarted","Data":"b0bb3eb38364026944a29a837a8392eaa34f4b373db67104c8728fb713c61bef"} Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.196629 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k596p" event={"ID":"21bd9c8a-3662-40f1-b939-ed5320b65bcb","Type":"ContainerStarted","Data":"560140d55c871f4acf6702cc662b365eb27c11e62d60a269589a35f2656d02e2"} Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.212489 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"3956a135-c94d-443b-9686-4b77db2e7df8","Type":"ContainerStarted","Data":"68f694ff5da30e4f25f3508ac61bc6b47416f28ed67c8f56e1a2b64509afb0d3"} Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.214152 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xd9d5" event={"ID":"a74e1346-a6a0-455c-917a-fa611dc53263","Type":"ContainerStarted","Data":"19cd16a1a81ca894740372fc1445068a61e7da0a75134d41872db2050e233d66"} Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.232375 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-k596p" podStartSLOduration=1.232361296 podStartE2EDuration="1.232361296s" podCreationTimestamp="2025-11-25 19:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:00.229098339 +0000 UTC m=+1222.145460705" watchObservedRunningTime="2025-11-25 19:54:00.232361296 +0000 UTC m=+1222.148723662" Nov 25 19:54:00 crc kubenswrapper[4775]: I1125 19:54:00.261540 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xd9d5" podStartSLOduration=2.261523579 podStartE2EDuration="2.261523579s" podCreationTimestamp="2025-11-25 19:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:00.24926137 +0000 UTC m=+1222.165623736" watchObservedRunningTime="2025-11-25 19:54:00.261523579 +0000 UTC m=+1222.177885945" Nov 25 19:54:01 crc kubenswrapper[4775]: I1125 19:54:01.223781 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" event={"ID":"8087212c-4a3c-486c-a884-fe42bbc31fa7","Type":"ContainerStarted","Data":"c66b95a042c094d9287457daeedbe8ba96422a0454677269d717fd41a37b8967"} Nov 25 19:54:01 crc kubenswrapper[4775]: I1125 19:54:01.224246 4775 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:54:01 crc kubenswrapper[4775]: I1125 19:54:01.250936 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" podStartSLOduration=3.250920739 podStartE2EDuration="3.250920739s" podCreationTimestamp="2025-11-25 19:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:01.245195086 +0000 UTC m=+1223.161557452" watchObservedRunningTime="2025-11-25 19:54:01.250920739 +0000 UTC m=+1223.167283105" Nov 25 19:54:01 crc kubenswrapper[4775]: I1125 19:54:01.849899 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:01 crc kubenswrapper[4775]: I1125 19:54:01.871451 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.237831 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3956a135-c94d-443b-9686-4b77db2e7df8","Type":"ContainerStarted","Data":"d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c"} Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.239358 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b2f763f2-c489-42ba-af44-450243c60955","Type":"ContainerStarted","Data":"7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762"} Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.239437 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b2f763f2-c489-42ba-af44-450243c60955","Type":"ContainerStarted","Data":"fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4"} Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.237998 4775 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3956a135-c94d-443b-9686-4b77db2e7df8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c" gracePeriod=30 Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.242558 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7ee223f-18f4-4136-87e1-3074d439352d","Type":"ContainerStarted","Data":"cd5237a60c2b6325119698e2f90f63501a81854461127cdf5e7494943612f649"} Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.244556 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8c0b8ab-c36f-4322-a83d-436af58b8961","Type":"ContainerStarted","Data":"927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8"} Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.244580 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8c0b8ab-c36f-4322-a83d-436af58b8961","Type":"ContainerStarted","Data":"58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce"} Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.244789 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a8c0b8ab-c36f-4322-a83d-436af58b8961" containerName="nova-metadata-log" containerID="cri-o://58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce" gracePeriod=30 Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.244832 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a8c0b8ab-c36f-4322-a83d-436af58b8961" containerName="nova-metadata-metadata" containerID="cri-o://927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8" gracePeriod=30 Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.259846 4775 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.209036999 podStartE2EDuration="5.259821977s" podCreationTimestamp="2025-11-25 19:53:58 +0000 UTC" firstStartedPulling="2025-11-25 19:53:59.286746332 +0000 UTC m=+1221.203108698" lastFinishedPulling="2025-11-25 19:54:02.33753131 +0000 UTC m=+1224.253893676" observedRunningTime="2025-11-25 19:54:03.25399677 +0000 UTC m=+1225.170359146" watchObservedRunningTime="2025-11-25 19:54:03.259821977 +0000 UTC m=+1225.176184343" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.294740 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.320290127 podStartE2EDuration="5.294721174s" podCreationTimestamp="2025-11-25 19:53:58 +0000 UTC" firstStartedPulling="2025-11-25 19:53:59.366279068 +0000 UTC m=+1221.282641434" lastFinishedPulling="2025-11-25 19:54:02.340710115 +0000 UTC m=+1224.257072481" observedRunningTime="2025-11-25 19:54:03.293912693 +0000 UTC m=+1225.210275059" watchObservedRunningTime="2025-11-25 19:54:03.294721174 +0000 UTC m=+1225.211083530" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.298506 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.534680426 podStartE2EDuration="5.298493566s" podCreationTimestamp="2025-11-25 19:53:58 +0000 UTC" firstStartedPulling="2025-11-25 19:53:59.573935925 +0000 UTC m=+1221.490298291" lastFinishedPulling="2025-11-25 19:54:02.337749065 +0000 UTC m=+1224.254111431" observedRunningTime="2025-11-25 19:54:03.280278166 +0000 UTC m=+1225.196640532" watchObservedRunningTime="2025-11-25 19:54:03.298493566 +0000 UTC m=+1225.214855932" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.313607 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.8762240719999999 podStartE2EDuration="5.313587801s" 
podCreationTimestamp="2025-11-25 19:53:58 +0000 UTC" firstStartedPulling="2025-11-25 19:53:58.900438148 +0000 UTC m=+1220.816800514" lastFinishedPulling="2025-11-25 19:54:02.337801877 +0000 UTC m=+1224.254164243" observedRunningTime="2025-11-25 19:54:03.309713887 +0000 UTC m=+1225.226076263" watchObservedRunningTime="2025-11-25 19:54:03.313587801 +0000 UTC m=+1225.229950177" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.682263 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.701835 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.835118 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.835179 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.890541 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.969390 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-config-data\") pod \"a8c0b8ab-c36f-4322-a83d-436af58b8961\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.969523 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-combined-ca-bundle\") pod \"a8c0b8ab-c36f-4322-a83d-436af58b8961\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.969611 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c0b8ab-c36f-4322-a83d-436af58b8961-logs\") pod \"a8c0b8ab-c36f-4322-a83d-436af58b8961\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.969638 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt7jt\" (UniqueName: \"kubernetes.io/projected/a8c0b8ab-c36f-4322-a83d-436af58b8961-kube-api-access-wt7jt\") pod \"a8c0b8ab-c36f-4322-a83d-436af58b8961\" (UID: \"a8c0b8ab-c36f-4322-a83d-436af58b8961\") " Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.970015 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8c0b8ab-c36f-4322-a83d-436af58b8961-logs" (OuterVolumeSpecName: "logs") pod "a8c0b8ab-c36f-4322-a83d-436af58b8961" (UID: "a8c0b8ab-c36f-4322-a83d-436af58b8961"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.974752 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c0b8ab-c36f-4322-a83d-436af58b8961-kube-api-access-wt7jt" (OuterVolumeSpecName: "kube-api-access-wt7jt") pod "a8c0b8ab-c36f-4322-a83d-436af58b8961" (UID: "a8c0b8ab-c36f-4322-a83d-436af58b8961"). InnerVolumeSpecName "kube-api-access-wt7jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:03 crc kubenswrapper[4775]: I1125 19:54:03.996939 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8c0b8ab-c36f-4322-a83d-436af58b8961" (UID: "a8c0b8ab-c36f-4322-a83d-436af58b8961"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.016677 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-config-data" (OuterVolumeSpecName: "config-data") pod "a8c0b8ab-c36f-4322-a83d-436af58b8961" (UID: "a8c0b8ab-c36f-4322-a83d-436af58b8961"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.071440 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.071475 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c0b8ab-c36f-4322-a83d-436af58b8961-logs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.071489 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt7jt\" (UniqueName: \"kubernetes.io/projected/a8c0b8ab-c36f-4322-a83d-436af58b8961-kube-api-access-wt7jt\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.071502 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c0b8ab-c36f-4322-a83d-436af58b8961-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.254522 4775 generic.go:334] "Generic (PLEG): container finished" podID="a8c0b8ab-c36f-4322-a83d-436af58b8961" containerID="927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8" exitCode=0 Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.254554 4775 generic.go:334] "Generic (PLEG): container finished" podID="a8c0b8ab-c36f-4322-a83d-436af58b8961" containerID="58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce" exitCode=143 Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.254567 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8c0b8ab-c36f-4322-a83d-436af58b8961","Type":"ContainerDied","Data":"927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8"} Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.254621 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8c0b8ab-c36f-4322-a83d-436af58b8961","Type":"ContainerDied","Data":"58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce"} Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.254639 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8c0b8ab-c36f-4322-a83d-436af58b8961","Type":"ContainerDied","Data":"162e48cd4ad4e9c115cbf02885724181c6c7eb67dd697d10a957b43d4fa46251"} Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.254582 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.254677 4775 scope.go:117] "RemoveContainer" containerID="927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.315126 4775 scope.go:117] "RemoveContainer" containerID="58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.327641 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.345961 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.369406 4775 scope.go:117] "RemoveContainer" containerID="927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8" Nov 25 19:54:04 crc kubenswrapper[4775]: E1125 19:54:04.370046 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8\": container with ID starting with 927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8 not found: ID does not exist" 
containerID="927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.370100 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8"} err="failed to get container status \"927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8\": rpc error: code = NotFound desc = could not find container \"927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8\": container with ID starting with 927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8 not found: ID does not exist" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.370132 4775 scope.go:117] "RemoveContainer" containerID="58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce" Nov 25 19:54:04 crc kubenswrapper[4775]: E1125 19:54:04.370400 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce\": container with ID starting with 58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce not found: ID does not exist" containerID="58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.370423 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce"} err="failed to get container status \"58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce\": rpc error: code = NotFound desc = could not find container \"58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce\": container with ID starting with 58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce not found: ID does not exist" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.370440 4775 scope.go:117] 
"RemoveContainer" containerID="927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.370900 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8"} err="failed to get container status \"927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8\": rpc error: code = NotFound desc = could not find container \"927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8\": container with ID starting with 927b4b99cbdaad334dee6f8bdd2c22899dd08e2a9736aad1f7b35c0c5699a5b8 not found: ID does not exist" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.370940 4775 scope.go:117] "RemoveContainer" containerID="58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.371158 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce"} err="failed to get container status \"58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce\": rpc error: code = NotFound desc = could not find container \"58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce\": container with ID starting with 58681aaaa466b3a7085d33c3099dc90a9b2e321c2924c33e6d6e84662361d5ce not found: ID does not exist" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.389376 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:04 crc kubenswrapper[4775]: E1125 19:54:04.389835 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c0b8ab-c36f-4322-a83d-436af58b8961" containerName="nova-metadata-metadata" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.389850 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c0b8ab-c36f-4322-a83d-436af58b8961" 
containerName="nova-metadata-metadata" Nov 25 19:54:04 crc kubenswrapper[4775]: E1125 19:54:04.389888 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c0b8ab-c36f-4322-a83d-436af58b8961" containerName="nova-metadata-log" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.389894 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c0b8ab-c36f-4322-a83d-436af58b8961" containerName="nova-metadata-log" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.390075 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c0b8ab-c36f-4322-a83d-436af58b8961" containerName="nova-metadata-log" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.390084 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c0b8ab-c36f-4322-a83d-436af58b8961" containerName="nova-metadata-metadata" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.391064 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.393120 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.393637 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.402606 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.494365 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc7ecda8-08e1-48d5-9695-6b2a47c08755-logs\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.494481 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.494520 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.494596 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v7qz\" (UniqueName: \"kubernetes.io/projected/fc7ecda8-08e1-48d5-9695-6b2a47c08755-kube-api-access-7v7qz\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.494616 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-config-data\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.596025 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-config-data\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.596099 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fc7ecda8-08e1-48d5-9695-6b2a47c08755-logs\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.596204 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.596251 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.596323 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v7qz\" (UniqueName: \"kubernetes.io/projected/fc7ecda8-08e1-48d5-9695-6b2a47c08755-kube-api-access-7v7qz\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.596620 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc7ecda8-08e1-48d5-9695-6b2a47c08755-logs\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.601165 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 
25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.602167 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-config-data\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.607321 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.616393 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v7qz\" (UniqueName: \"kubernetes.io/projected/fc7ecda8-08e1-48d5-9695-6b2a47c08755-kube-api-access-7v7qz\") pod \"nova-metadata-0\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.719296 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:04 crc kubenswrapper[4775]: I1125 19:54:04.871900 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c0b8ab-c36f-4322-a83d-436af58b8961" path="/var/lib/kubelet/pods/a8c0b8ab-c36f-4322-a83d-436af58b8961/volumes" Nov 25 19:54:05 crc kubenswrapper[4775]: I1125 19:54:05.219277 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:05 crc kubenswrapper[4775]: W1125 19:54:05.224736 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc7ecda8_08e1_48d5_9695_6b2a47c08755.slice/crio-4af93d1194188110757648cff89a0f6957fe0d86ca8e2e3d75112ebb4aefe335 WatchSource:0}: Error finding container 4af93d1194188110757648cff89a0f6957fe0d86ca8e2e3d75112ebb4aefe335: Status 404 returned error can't find the container with id 4af93d1194188110757648cff89a0f6957fe0d86ca8e2e3d75112ebb4aefe335 Nov 25 19:54:05 crc kubenswrapper[4775]: I1125 19:54:05.266026 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fc7ecda8-08e1-48d5-9695-6b2a47c08755","Type":"ContainerStarted","Data":"4af93d1194188110757648cff89a0f6957fe0d86ca8e2e3d75112ebb4aefe335"} Nov 25 19:54:06 crc kubenswrapper[4775]: I1125 19:54:06.272085 4775 generic.go:334] "Generic (PLEG): container finished" podID="a74e1346-a6a0-455c-917a-fa611dc53263" containerID="19cd16a1a81ca894740372fc1445068a61e7da0a75134d41872db2050e233d66" exitCode=0 Nov 25 19:54:06 crc kubenswrapper[4775]: I1125 19:54:06.272223 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xd9d5" event={"ID":"a74e1346-a6a0-455c-917a-fa611dc53263","Type":"ContainerDied","Data":"19cd16a1a81ca894740372fc1445068a61e7da0a75134d41872db2050e233d66"} Nov 25 19:54:06 crc kubenswrapper[4775]: I1125 19:54:06.274594 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"fc7ecda8-08e1-48d5-9695-6b2a47c08755","Type":"ContainerStarted","Data":"62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0"} Nov 25 19:54:06 crc kubenswrapper[4775]: I1125 19:54:06.274725 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fc7ecda8-08e1-48d5-9695-6b2a47c08755","Type":"ContainerStarted","Data":"37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557"} Nov 25 19:54:06 crc kubenswrapper[4775]: I1125 19:54:06.337219 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.337197458 podStartE2EDuration="2.337197458s" podCreationTimestamp="2025-11-25 19:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:06.327194269 +0000 UTC m=+1228.243556635" watchObservedRunningTime="2025-11-25 19:54:06.337197458 +0000 UTC m=+1228.253559824" Nov 25 19:54:06 crc kubenswrapper[4775]: I1125 19:54:06.545682 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.742836 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.852674 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lkb4\" (UniqueName: \"kubernetes.io/projected/a74e1346-a6a0-455c-917a-fa611dc53263-kube-api-access-6lkb4\") pod \"a74e1346-a6a0-455c-917a-fa611dc53263\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.853235 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-scripts\") pod \"a74e1346-a6a0-455c-917a-fa611dc53263\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.853382 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-config-data\") pod \"a74e1346-a6a0-455c-917a-fa611dc53263\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.853619 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-combined-ca-bundle\") pod \"a74e1346-a6a0-455c-917a-fa611dc53263\" (UID: \"a74e1346-a6a0-455c-917a-fa611dc53263\") " Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.861868 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74e1346-a6a0-455c-917a-fa611dc53263-kube-api-access-6lkb4" (OuterVolumeSpecName: "kube-api-access-6lkb4") pod "a74e1346-a6a0-455c-917a-fa611dc53263" (UID: "a74e1346-a6a0-455c-917a-fa611dc53263"). InnerVolumeSpecName "kube-api-access-6lkb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.862811 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-scripts" (OuterVolumeSpecName: "scripts") pod "a74e1346-a6a0-455c-917a-fa611dc53263" (UID: "a74e1346-a6a0-455c-917a-fa611dc53263"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.892810 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a74e1346-a6a0-455c-917a-fa611dc53263" (UID: "a74e1346-a6a0-455c-917a-fa611dc53263"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.895713 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-config-data" (OuterVolumeSpecName: "config-data") pod "a74e1346-a6a0-455c-917a-fa611dc53263" (UID: "a74e1346-a6a0-455c-917a-fa611dc53263"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.955944 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lkb4\" (UniqueName: \"kubernetes.io/projected/a74e1346-a6a0-455c-917a-fa611dc53263-kube-api-access-6lkb4\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.955978 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.955988 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:07 crc kubenswrapper[4775]: I1125 19:54:07.955997 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74e1346-a6a0-455c-917a-fa611dc53263-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.295720 4775 generic.go:334] "Generic (PLEG): container finished" podID="21bd9c8a-3662-40f1-b939-ed5320b65bcb" containerID="b0bb3eb38364026944a29a837a8392eaa34f4b373db67104c8728fb713c61bef" exitCode=0 Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.295809 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k596p" event={"ID":"21bd9c8a-3662-40f1-b939-ed5320b65bcb","Type":"ContainerDied","Data":"b0bb3eb38364026944a29a837a8392eaa34f4b373db67104c8728fb713c61bef"} Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.297672 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xd9d5" 
event={"ID":"a74e1346-a6a0-455c-917a-fa611dc53263","Type":"ContainerDied","Data":"3b1547715f1927c1f56bbff7fb5f939583e9bce154d98a46405176da391e906d"} Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.297699 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b1547715f1927c1f56bbff7fb5f939583e9bce154d98a46405176da391e906d" Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.297757 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xd9d5" Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.463352 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.463827 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6b6ac464-ee79-41a6-8977-0db9e5044ee9" containerName="kube-state-metrics" containerID="cri-o://5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb" gracePeriod=30 Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.539036 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.539292 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b2f763f2-c489-42ba-af44-450243c60955" containerName="nova-api-log" containerID="cri-o://fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4" gracePeriod=30 Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.539782 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b2f763f2-c489-42ba-af44-450243c60955" containerName="nova-api-api" containerID="cri-o://7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762" gracePeriod=30 Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.553510 4775 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.553733 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a7ee223f-18f4-4136-87e1-3074d439352d" containerName="nova-scheduler-scheduler" containerID="cri-o://cd5237a60c2b6325119698e2f90f63501a81854461127cdf5e7494943612f649" gracePeriod=30 Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.581443 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.581659 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerName="nova-metadata-log" containerID="cri-o://37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557" gracePeriod=30 Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.582023 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerName="nova-metadata-metadata" containerID="cri-o://62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0" gracePeriod=30 Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.907843 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.970247 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-bn57q"] Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.973040 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 19:54:08 crc kubenswrapper[4775]: I1125 19:54:08.970490 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" podUID="aa5215c2-c105-433b-b02f-be661535774c" containerName="dnsmasq-dns" containerID="cri-o://da719608c5cae4286af129bf9ee801e3164136bbc2dfc2dda1ee42be51a19dcd" gracePeriod=10 Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.073903 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96hcn\" (UniqueName: \"kubernetes.io/projected/6b6ac464-ee79-41a6-8977-0db9e5044ee9-kube-api-access-96hcn\") pod \"6b6ac464-ee79-41a6-8977-0db9e5044ee9\" (UID: \"6b6ac464-ee79-41a6-8977-0db9e5044ee9\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.079606 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6ac464-ee79-41a6-8977-0db9e5044ee9-kube-api-access-96hcn" (OuterVolumeSpecName: "kube-api-access-96hcn") pod "6b6ac464-ee79-41a6-8977-0db9e5044ee9" (UID: "6b6ac464-ee79-41a6-8977-0db9e5044ee9"). InnerVolumeSpecName "kube-api-access-96hcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.151596 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.176416 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v7qz\" (UniqueName: \"kubernetes.io/projected/fc7ecda8-08e1-48d5-9695-6b2a47c08755-kube-api-access-7v7qz\") pod \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.176483 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-combined-ca-bundle\") pod \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.176513 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-config-data\") pod \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.176559 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc7ecda8-08e1-48d5-9695-6b2a47c08755-logs\") pod \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.176592 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-nova-metadata-tls-certs\") pod \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\" (UID: \"fc7ecda8-08e1-48d5-9695-6b2a47c08755\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.177025 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96hcn\" (UniqueName: 
\"kubernetes.io/projected/6b6ac464-ee79-41a6-8977-0db9e5044ee9-kube-api-access-96hcn\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.177251 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc7ecda8-08e1-48d5-9695-6b2a47c08755-logs" (OuterVolumeSpecName: "logs") pod "fc7ecda8-08e1-48d5-9695-6b2a47c08755" (UID: "fc7ecda8-08e1-48d5-9695-6b2a47c08755"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.180498 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7ecda8-08e1-48d5-9695-6b2a47c08755-kube-api-access-7v7qz" (OuterVolumeSpecName: "kube-api-access-7v7qz") pod "fc7ecda8-08e1-48d5-9695-6b2a47c08755" (UID: "fc7ecda8-08e1-48d5-9695-6b2a47c08755"). InnerVolumeSpecName "kube-api-access-7v7qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.181290 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.208657 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-config-data" (OuterVolumeSpecName: "config-data") pod "fc7ecda8-08e1-48d5-9695-6b2a47c08755" (UID: "fc7ecda8-08e1-48d5-9695-6b2a47c08755"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.209236 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc7ecda8-08e1-48d5-9695-6b2a47c08755" (UID: "fc7ecda8-08e1-48d5-9695-6b2a47c08755"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.229931 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fc7ecda8-08e1-48d5-9695-6b2a47c08755" (UID: "fc7ecda8-08e1-48d5-9695-6b2a47c08755"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.277973 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2f763f2-c489-42ba-af44-450243c60955-logs\") pod \"b2f763f2-c489-42ba-af44-450243c60955\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.278056 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-combined-ca-bundle\") pod \"b2f763f2-c489-42ba-af44-450243c60955\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.278090 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-config-data\") pod \"b2f763f2-c489-42ba-af44-450243c60955\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.278139 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db6r7\" (UniqueName: \"kubernetes.io/projected/b2f763f2-c489-42ba-af44-450243c60955-kube-api-access-db6r7\") pod \"b2f763f2-c489-42ba-af44-450243c60955\" (UID: \"b2f763f2-c489-42ba-af44-450243c60955\") " Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.278475 4775 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f763f2-c489-42ba-af44-450243c60955-logs" (OuterVolumeSpecName: "logs") pod "b2f763f2-c489-42ba-af44-450243c60955" (UID: "b2f763f2-c489-42ba-af44-450243c60955"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.279035 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v7qz\" (UniqueName: \"kubernetes.io/projected/fc7ecda8-08e1-48d5-9695-6b2a47c08755-kube-api-access-7v7qz\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.279049 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.279058 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.279067 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc7ecda8-08e1-48d5-9695-6b2a47c08755-logs\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.279077 4775 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc7ecda8-08e1-48d5-9695-6b2a47c08755-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.279085 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2f763f2-c489-42ba-af44-450243c60955-logs\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.284819 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f763f2-c489-42ba-af44-450243c60955-kube-api-access-db6r7" (OuterVolumeSpecName: "kube-api-access-db6r7") pod "b2f763f2-c489-42ba-af44-450243c60955" (UID: "b2f763f2-c489-42ba-af44-450243c60955"). InnerVolumeSpecName "kube-api-access-db6r7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.302368 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-config-data" (OuterVolumeSpecName: "config-data") pod "b2f763f2-c489-42ba-af44-450243c60955" (UID: "b2f763f2-c489-42ba-af44-450243c60955"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.304812 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2f763f2-c489-42ba-af44-450243c60955" (UID: "b2f763f2-c489-42ba-af44-450243c60955"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.307166 4775 generic.go:334] "Generic (PLEG): container finished" podID="6b6ac464-ee79-41a6-8977-0db9e5044ee9" containerID="5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb" exitCode=2
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.307227 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.307240 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b6ac464-ee79-41a6-8977-0db9e5044ee9","Type":"ContainerDied","Data":"5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb"}
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.307350 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b6ac464-ee79-41a6-8977-0db9e5044ee9","Type":"ContainerDied","Data":"6be61731458cf2b907c96f971ba5a8cfe0fcb3e67e97aedef88af66e9c881677"}
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.307373 4775 scope.go:117] "RemoveContainer" containerID="5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.308834 4775 generic.go:334] "Generic (PLEG): container finished" podID="b2f763f2-c489-42ba-af44-450243c60955" containerID="7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762" exitCode=0
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.308852 4775 generic.go:334] "Generic (PLEG): container finished" podID="b2f763f2-c489-42ba-af44-450243c60955" containerID="fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4" exitCode=143
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.308881 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b2f763f2-c489-42ba-af44-450243c60955","Type":"ContainerDied","Data":"7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762"}
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.308894 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b2f763f2-c489-42ba-af44-450243c60955","Type":"ContainerDied","Data":"fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4"}
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.308903 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b2f763f2-c489-42ba-af44-450243c60955","Type":"ContainerDied","Data":"ce29dc838c13214f72470d26a0704353c943ab0f518a874172a320ccd42a4e63"}
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.308956 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.312205 4775 generic.go:334] "Generic (PLEG): container finished" podID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerID="62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0" exitCode=0
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.312229 4775 generic.go:334] "Generic (PLEG): container finished" podID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerID="37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557" exitCode=143
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.312264 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fc7ecda8-08e1-48d5-9695-6b2a47c08755","Type":"ContainerDied","Data":"62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0"}
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.312287 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fc7ecda8-08e1-48d5-9695-6b2a47c08755","Type":"ContainerDied","Data":"37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557"}
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.312297 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fc7ecda8-08e1-48d5-9695-6b2a47c08755","Type":"ContainerDied","Data":"4af93d1194188110757648cff89a0f6957fe0d86ca8e2e3d75112ebb4aefe335"}
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.312340 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.316150 4775 generic.go:334] "Generic (PLEG): container finished" podID="aa5215c2-c105-433b-b02f-be661535774c" containerID="da719608c5cae4286af129bf9ee801e3164136bbc2dfc2dda1ee42be51a19dcd" exitCode=0
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.316284 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" event={"ID":"aa5215c2-c105-433b-b02f-be661535774c","Type":"ContainerDied","Data":"da719608c5cae4286af129bf9ee801e3164136bbc2dfc2dda1ee42be51a19dcd"}
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.329317 4775 scope.go:117] "RemoveContainer" containerID="5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.329836 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb\": container with ID starting with 5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb not found: ID does not exist" containerID="5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.329886 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb"} err="failed to get container status \"5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb\": rpc error: code = NotFound desc = could not find container \"5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb\": container with ID starting with 5b19f049c740c75ce1acb906214c8d9fee261f0adf65c1dc0641463f33a02bdb not found: ID does not exist"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.329915 4775 scope.go:117] "RemoveContainer" containerID="7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.362164 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.386673 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.389307 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.389340 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2f763f2-c489-42ba-af44-450243c60955-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.389352 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db6r7\" (UniqueName: \"kubernetes.io/projected/b2f763f2-c489-42ba-af44-450243c60955-kube-api-access-db6r7\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.391735 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.405821 4775 scope.go:117] "RemoveContainer" containerID="fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.412908 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.413380 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6ac464-ee79-41a6-8977-0db9e5044ee9" containerName="kube-state-metrics"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.413396 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6ac464-ee79-41a6-8977-0db9e5044ee9" containerName="kube-state-metrics"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.413413 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f763f2-c489-42ba-af44-450243c60955" containerName="nova-api-api"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.413440 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f763f2-c489-42ba-af44-450243c60955" containerName="nova-api-api"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.413452 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerName="nova-metadata-log"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.413457 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerName="nova-metadata-log"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.413473 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerName="nova-metadata-metadata"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.413478 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerName="nova-metadata-metadata"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.413512 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f763f2-c489-42ba-af44-450243c60955" containerName="nova-api-log"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.413519 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f763f2-c489-42ba-af44-450243c60955" containerName="nova-api-log"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.413530 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74e1346-a6a0-455c-917a-fa611dc53263" containerName="nova-manage"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.413536 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74e1346-a6a0-455c-917a-fa611dc53263" containerName="nova-manage"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.420563 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.420916 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74e1346-a6a0-455c-917a-fa611dc53263" containerName="nova-manage"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.420946 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerName="nova-metadata-metadata"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.420965 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f763f2-c489-42ba-af44-450243c60955" containerName="nova-api-api"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.420980 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6ac464-ee79-41a6-8977-0db9e5044ee9" containerName="kube-state-metrics"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.420990 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" containerName="nova-metadata-log"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.421010 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f763f2-c489-42ba-af44-450243c60955" containerName="nova-api-log"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.421540 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.421662 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.424236 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.424404 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.429059 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.431969 4775 scope.go:117] "RemoveContainer" containerID="7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.432386 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762\": container with ID starting with 7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762 not found: ID does not exist" containerID="7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.432419 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762"} err="failed to get container status \"7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762\": rpc error: code = NotFound desc = could not find container \"7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762\": container with ID starting with 7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762 not found: ID does not exist"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.432443 4775 scope.go:117] "RemoveContainer" containerID="fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.432627 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4\": container with ID starting with fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4 not found: ID does not exist" containerID="fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.432640 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4"} err="failed to get container status \"fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4\": rpc error: code = NotFound desc = could not find container \"fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4\": container with ID starting with fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4 not found: ID does not exist"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.432677 4775 scope.go:117] "RemoveContainer" containerID="7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.432835 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762"} err="failed to get container status \"7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762\": rpc error: code = NotFound desc = could not find container \"7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762\": container with ID starting with 7ef92e58dee9106646685f6051fbce4db252f34c76dfad9107cb2f13271ca762 not found: ID does not exist"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.432857 4775 scope.go:117] "RemoveContainer" containerID="fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.433121 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4"} err="failed to get container status \"fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4\": rpc error: code = NotFound desc = could not find container \"fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4\": container with ID starting with fc4cba3925560c59f5348caf4767a205692768e42f95fe9095edec8324f1a7f4 not found: ID does not exist"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.433133 4775 scope.go:117] "RemoveContainer" containerID="62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.435842 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.467690 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.522101 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.522771 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5215c2-c105-433b-b02f-be661535774c" containerName="dnsmasq-dns"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.522785 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5215c2-c105-433b-b02f-be661535774c" containerName="dnsmasq-dns"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.522822 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5215c2-c105-433b-b02f-be661535774c" containerName="init"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.522828 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5215c2-c105-433b-b02f-be661535774c" containerName="init"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.523148 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5215c2-c105-433b-b02f-be661535774c" containerName="dnsmasq-dns"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.524483 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.525582 4775 scope.go:117] "RemoveContainer" containerID="37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.529347 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.540413 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.609708 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.618214 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-dns-svc\") pod \"aa5215c2-c105-433b-b02f-be661535774c\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") "
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.618272 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-nb\") pod \"aa5215c2-c105-433b-b02f-be661535774c\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") "
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.618291 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-config\") pod \"aa5215c2-c105-433b-b02f-be661535774c\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") "
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.618327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-sb\") pod \"aa5215c2-c105-433b-b02f-be661535774c\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") "
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.618411 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mptz\" (UniqueName: \"kubernetes.io/projected/aa5215c2-c105-433b-b02f-be661535774c-kube-api-access-9mptz\") pod \"aa5215c2-c105-433b-b02f-be661535774c\" (UID: \"aa5215c2-c105-433b-b02f-be661535774c\") "
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.618629 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s7zd\" (UniqueName: \"kubernetes.io/projected/7a38d394-4c5f-4d60-92af-3407e58769da-kube-api-access-9s7zd\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.618672 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a38d394-4c5f-4d60-92af-3407e58769da-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.618760 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a38d394-4c5f-4d60-92af-3407e58769da-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.618793 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a38d394-4c5f-4d60-92af-3407e58769da-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.627416 4775 scope.go:117] "RemoveContainer" containerID="62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.628374 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.630459 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.638976 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.639298 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0\": container with ID starting with 62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0 not found: ID does not exist" containerID="62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.639328 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0"} err="failed to get container status \"62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0\": rpc error: code = NotFound desc = could not find container \"62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0\": container with ID starting with 62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0 not found: ID does not exist"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.639347 4775 scope.go:117] "RemoveContainer" containerID="37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.642705 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.644376 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5215c2-c105-433b-b02f-be661535774c-kube-api-access-9mptz" (OuterVolumeSpecName: "kube-api-access-9mptz") pod "aa5215c2-c105-433b-b02f-be661535774c" (UID: "aa5215c2-c105-433b-b02f-be661535774c"). InnerVolumeSpecName "kube-api-access-9mptz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:54:09 crc kubenswrapper[4775]: E1125 19:54:09.674216 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557\": container with ID starting with 37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557 not found: ID does not exist" containerID="37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.674277 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557"} err="failed to get container status \"37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557\": rpc error: code = NotFound desc = could not find container \"37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557\": container with ID starting with 37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557 not found: ID does not exist"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.674301 4775 scope.go:117] "RemoveContainer" containerID="62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.680209 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0"} err="failed to get container status \"62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0\": rpc error: code = NotFound desc = could not find container \"62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0\": container with ID starting with 62e20d27952d01c7611022e5d6cd0cd75fc077fa9275864d5ea301e49d092df0 not found: ID does not exist"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.680246 4775 scope.go:117] "RemoveContainer" containerID="37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.694488 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557"} err="failed to get container status \"37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557\": rpc error: code = NotFound desc = could not find container \"37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557\": container with ID starting with 37536b00ff7cbbc276056dd315fbf55b69d92351ab3b6b59a2a34efbed337557 not found: ID does not exist"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720043 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-logs\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720079 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ae35de2-6d67-497a-840a-bc88ddf16205-logs\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720095 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9vq\" (UniqueName: \"kubernetes.io/projected/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-kube-api-access-fw9vq\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720141 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-config-data\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720158 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720176 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-config-data\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720232 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a38d394-4c5f-4d60-92af-3407e58769da-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720250 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720282 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh865\" (UniqueName: \"kubernetes.io/projected/2ae35de2-6d67-497a-840a-bc88ddf16205-kube-api-access-fh865\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720301 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a38d394-4c5f-4d60-92af-3407e58769da-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720344 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720362 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s7zd\" (UniqueName: \"kubernetes.io/projected/7a38d394-4c5f-4d60-92af-3407e58769da-kube-api-access-9s7zd\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720384 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a38d394-4c5f-4d60-92af-3407e58769da-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.720427 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mptz\" (UniqueName: \"kubernetes.io/projected/aa5215c2-c105-433b-b02f-be661535774c-kube-api-access-9mptz\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.724571 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a38d394-4c5f-4d60-92af-3407e58769da-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.727799 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a38d394-4c5f-4d60-92af-3407e58769da-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.736895 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a38d394-4c5f-4d60-92af-3407e58769da-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.743880 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s7zd\" (UniqueName: \"kubernetes.io/projected/7a38d394-4c5f-4d60-92af-3407e58769da-kube-api-access-9s7zd\") pod \"kube-state-metrics-0\" (UID: \"7a38d394-4c5f-4d60-92af-3407e58769da\") " pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.752761 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.761963 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-config" (OuterVolumeSpecName: "config") pod "aa5215c2-c105-433b-b02f-be661535774c" (UID: "aa5215c2-c105-433b-b02f-be661535774c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.772293 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa5215c2-c105-433b-b02f-be661535774c" (UID: "aa5215c2-c105-433b-b02f-be661535774c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.795153 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa5215c2-c105-433b-b02f-be661535774c" (UID: "aa5215c2-c105-433b-b02f-be661535774c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.801196 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa5215c2-c105-433b-b02f-be661535774c" (UID: "aa5215c2-c105-433b-b02f-be661535774c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821558 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-config-data\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821608 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821635 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-config-data\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821710 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0"
Nov 25 19:54:09 crc kubenswrapper[4775]: I1125
19:54:09.821741 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh865\" (UniqueName: \"kubernetes.io/projected/2ae35de2-6d67-497a-840a-bc88ddf16205-kube-api-access-fh865\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821782 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821811 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-logs\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821826 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ae35de2-6d67-497a-840a-bc88ddf16205-logs\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821842 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9vq\" (UniqueName: \"kubernetes.io/projected/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-kube-api-access-fw9vq\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821900 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 
19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821909 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821918 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.821927 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5215c2-c105-433b-b02f-be661535774c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.823916 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-logs\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.825562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ae35de2-6d67-497a-840a-bc88ddf16205-logs\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.827605 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-config-data\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.830958 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.831829 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-config-data\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.832422 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.836314 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.850589 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9vq\" (UniqueName: \"kubernetes.io/projected/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-kube-api-access-fw9vq\") pod \"nova-metadata-0\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " pod="openstack/nova-metadata-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.864842 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh865\" (UniqueName: \"kubernetes.io/projected/2ae35de2-6d67-497a-840a-bc88ddf16205-kube-api-access-fh865\") pod \"nova-api-0\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " pod="openstack/nova-api-0" Nov 25 19:54:09 
crc kubenswrapper[4775]: I1125 19:54:09.900046 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.939244 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:09 crc kubenswrapper[4775]: I1125 19:54:09.947548 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.107902 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.108165 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="ceilometer-central-agent" containerID="cri-o://19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d" gracePeriod=30 Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.108573 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="proxy-httpd" containerID="cri-o://650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe" gracePeriod=30 Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.108621 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="sg-core" containerID="cri-o://a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77" gracePeriod=30 Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.109114 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="ceilometer-notification-agent" 
containerID="cri-o://53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4" gracePeriod=30 Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.127488 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-config-data\") pod \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.127584 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-scripts\") pod \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.127609 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6gnl\" (UniqueName: \"kubernetes.io/projected/21bd9c8a-3662-40f1-b939-ed5320b65bcb-kube-api-access-m6gnl\") pod \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.127626 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-combined-ca-bundle\") pod \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\" (UID: \"21bd9c8a-3662-40f1-b939-ed5320b65bcb\") " Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.132381 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21bd9c8a-3662-40f1-b939-ed5320b65bcb-kube-api-access-m6gnl" (OuterVolumeSpecName: "kube-api-access-m6gnl") pod "21bd9c8a-3662-40f1-b939-ed5320b65bcb" (UID: "21bd9c8a-3662-40f1-b939-ed5320b65bcb"). InnerVolumeSpecName "kube-api-access-m6gnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.132797 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-scripts" (OuterVolumeSpecName: "scripts") pod "21bd9c8a-3662-40f1-b939-ed5320b65bcb" (UID: "21bd9c8a-3662-40f1-b939-ed5320b65bcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.155302 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-config-data" (OuterVolumeSpecName: "config-data") pod "21bd9c8a-3662-40f1-b939-ed5320b65bcb" (UID: "21bd9c8a-3662-40f1-b939-ed5320b65bcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.163802 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21bd9c8a-3662-40f1-b939-ed5320b65bcb" (UID: "21bd9c8a-3662-40f1-b939-ed5320b65bcb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.221218 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.230055 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.230086 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.230095 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6gnl\" (UniqueName: \"kubernetes.io/projected/21bd9c8a-3662-40f1-b939-ed5320b65bcb-kube-api-access-m6gnl\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.230109 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bd9c8a-3662-40f1-b939-ed5320b65bcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:10 crc kubenswrapper[4775]: W1125 19:54:10.235741 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a38d394_4c5f_4d60_92af_3407e58769da.slice/crio-c4d0859f80a8541b9842a0862b1e8a3a5d1253d9874f4a00f73c8ad9eaf53fe7 WatchSource:0}: Error finding container c4d0859f80a8541b9842a0862b1e8a3a5d1253d9874f4a00f73c8ad9eaf53fe7: Status 404 returned error can't find the container with id c4d0859f80a8541b9842a0862b1e8a3a5d1253d9874f4a00f73c8ad9eaf53fe7 Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.344227 4775 generic.go:334] "Generic (PLEG): container finished" podID="a7ee223f-18f4-4136-87e1-3074d439352d" 
containerID="cd5237a60c2b6325119698e2f90f63501a81854461127cdf5e7494943612f649" exitCode=0 Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.344288 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7ee223f-18f4-4136-87e1-3074d439352d","Type":"ContainerDied","Data":"cd5237a60c2b6325119698e2f90f63501a81854461127cdf5e7494943612f649"} Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.358461 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" event={"ID":"aa5215c2-c105-433b-b02f-be661535774c","Type":"ContainerDied","Data":"7b2f7db44cde0cb44b0826983e28dbcd045c081b1dfb012f52b8296cec28cab9"} Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.358511 4775 scope.go:117] "RemoveContainer" containerID="da719608c5cae4286af129bf9ee801e3164136bbc2dfc2dda1ee42be51a19dcd" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.359290 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-bn57q" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.368024 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k596p" event={"ID":"21bd9c8a-3662-40f1-b939-ed5320b65bcb","Type":"ContainerDied","Data":"560140d55c871f4acf6702cc662b365eb27c11e62d60a269589a35f2656d02e2"} Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.368076 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="560140d55c871f4acf6702cc662b365eb27c11e62d60a269589a35f2656d02e2" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.368164 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k596p" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.400617 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.421803 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 19:54:10 crc kubenswrapper[4775]: E1125 19:54:10.422290 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21bd9c8a-3662-40f1-b939-ed5320b65bcb" containerName="nova-cell1-conductor-db-sync" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.422317 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="21bd9c8a-3662-40f1-b939-ed5320b65bcb" containerName="nova-cell1-conductor-db-sync" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.422554 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="21bd9c8a-3662-40f1-b939-ed5320b65bcb" containerName="nova-cell1-conductor-db-sync" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.423438 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.425524 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.437412 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.455209 4775 scope.go:117] "RemoveContainer" containerID="c5c0e9b1976351be2cd7bda5d158ba864d20867fe036b05ce105f84fe89b2237" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.455363 4775 generic.go:334] "Generic (PLEG): container finished" podID="bf62a13f-3710-46a8-841f-26c01081361d" containerID="a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77" exitCode=2 Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.455419 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerDied","Data":"a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77"} Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.470044 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a38d394-4c5f-4d60-92af-3407e58769da","Type":"ContainerStarted","Data":"c4d0859f80a8541b9842a0862b1e8a3a5d1253d9874f4a00f73c8ad9eaf53fe7"} Nov 25 19:54:10 crc kubenswrapper[4775]: W1125 19:54:10.487740 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod402fddde_9ee1_4d1f_906c_1ca8379f3fa2.slice/crio-57b6a36ec5e6c7d97cb6dea8d4a9ff6a6a6f0e8220712a03685a892f6de01d49 WatchSource:0}: Error finding container 57b6a36ec5e6c7d97cb6dea8d4a9ff6a6a6f0e8220712a03685a892f6de01d49: Status 404 returned error can't find the container with id 57b6a36ec5e6c7d97cb6dea8d4a9ff6a6a6f0e8220712a03685a892f6de01d49 Nov 25 19:54:10 crc 
kubenswrapper[4775]: I1125 19:54:10.493683 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-bn57q"] Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.502613 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-bn57q"] Nov 25 19:54:10 crc kubenswrapper[4775]: W1125 19:54:10.513016 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ae35de2_6d67_497a_840a_bc88ddf16205.slice/crio-5da7ff240fac9a046132e861e48a850c45728ee1a82385160e18d1990b56258c WatchSource:0}: Error finding container 5da7ff240fac9a046132e861e48a850c45728ee1a82385160e18d1990b56258c: Status 404 returned error can't find the container with id 5da7ff240fac9a046132e861e48a850c45728ee1a82385160e18d1990b56258c Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.514187 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.524697 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.546497 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e2c2e7-000b-4f8a-a064-209cd6036632-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"64e2c2e7-000b-4f8a-a064-209cd6036632\") " pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.546612 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64e2c2e7-000b-4f8a-a064-209cd6036632-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"64e2c2e7-000b-4f8a-a064-209cd6036632\") " pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.546689 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hcxg\" (UniqueName: \"kubernetes.io/projected/64e2c2e7-000b-4f8a-a064-209cd6036632-kube-api-access-9hcxg\") pod \"nova-cell1-conductor-0\" (UID: \"64e2c2e7-000b-4f8a-a064-209cd6036632\") " pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.647270 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2dxt\" (UniqueName: \"kubernetes.io/projected/a7ee223f-18f4-4136-87e1-3074d439352d-kube-api-access-q2dxt\") pod \"a7ee223f-18f4-4136-87e1-3074d439352d\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.647471 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-config-data\") pod \"a7ee223f-18f4-4136-87e1-3074d439352d\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " Nov 25 19:54:10 crc 
kubenswrapper[4775]: I1125 19:54:10.647565 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-combined-ca-bundle\") pod \"a7ee223f-18f4-4136-87e1-3074d439352d\" (UID: \"a7ee223f-18f4-4136-87e1-3074d439352d\") " Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.647808 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64e2c2e7-000b-4f8a-a064-209cd6036632-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"64e2c2e7-000b-4f8a-a064-209cd6036632\") " pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.647874 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hcxg\" (UniqueName: \"kubernetes.io/projected/64e2c2e7-000b-4f8a-a064-209cd6036632-kube-api-access-9hcxg\") pod \"nova-cell1-conductor-0\" (UID: \"64e2c2e7-000b-4f8a-a064-209cd6036632\") " pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.647971 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e2c2e7-000b-4f8a-a064-209cd6036632-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"64e2c2e7-000b-4f8a-a064-209cd6036632\") " pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.651811 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ee223f-18f4-4136-87e1-3074d439352d-kube-api-access-q2dxt" (OuterVolumeSpecName: "kube-api-access-q2dxt") pod "a7ee223f-18f4-4136-87e1-3074d439352d" (UID: "a7ee223f-18f4-4136-87e1-3074d439352d"). InnerVolumeSpecName "kube-api-access-q2dxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.652296 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64e2c2e7-000b-4f8a-a064-209cd6036632-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"64e2c2e7-000b-4f8a-a064-209cd6036632\") " pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.656587 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e2c2e7-000b-4f8a-a064-209cd6036632-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"64e2c2e7-000b-4f8a-a064-209cd6036632\") " pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.678627 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hcxg\" (UniqueName: \"kubernetes.io/projected/64e2c2e7-000b-4f8a-a064-209cd6036632-kube-api-access-9hcxg\") pod \"nova-cell1-conductor-0\" (UID: \"64e2c2e7-000b-4f8a-a064-209cd6036632\") " pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.681705 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-config-data" (OuterVolumeSpecName: "config-data") pod "a7ee223f-18f4-4136-87e1-3074d439352d" (UID: "a7ee223f-18f4-4136-87e1-3074d439352d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.688509 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7ee223f-18f4-4136-87e1-3074d439352d" (UID: "a7ee223f-18f4-4136-87e1-3074d439352d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.748288 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.749410 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2dxt\" (UniqueName: \"kubernetes.io/projected/a7ee223f-18f4-4136-87e1-3074d439352d-kube-api-access-q2dxt\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.749439 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.749450 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee223f-18f4-4136-87e1-3074d439352d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.865416 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6ac464-ee79-41a6-8977-0db9e5044ee9" path="/var/lib/kubelet/pods/6b6ac464-ee79-41a6-8977-0db9e5044ee9/volumes"
Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.866143 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5215c2-c105-433b-b02f-be661535774c" path="/var/lib/kubelet/pods/aa5215c2-c105-433b-b02f-be661535774c/volumes"
Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.869676 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2f763f2-c489-42ba-af44-450243c60955" path="/var/lib/kubelet/pods/b2f763f2-c489-42ba-af44-450243c60955/volumes"
Nov 25 19:54:10 crc kubenswrapper[4775]: I1125 19:54:10.870565 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7ecda8-08e1-48d5-9695-6b2a47c08755" path="/var/lib/kubelet/pods/fc7ecda8-08e1-48d5-9695-6b2a47c08755/volumes"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.206789 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 25 19:54:11 crc kubenswrapper[4775]: W1125 19:54:11.214174 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64e2c2e7_000b_4f8a_a064_209cd6036632.slice/crio-a4d1f17eeff8258f3a07a0a9eda91498f3414961de8423ca288ced54db01f8e6 WatchSource:0}: Error finding container a4d1f17eeff8258f3a07a0a9eda91498f3414961de8423ca288ced54db01f8e6: Status 404 returned error can't find the container with id a4d1f17eeff8258f3a07a0a9eda91498f3414961de8423ca288ced54db01f8e6
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.480877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"402fddde-9ee1-4d1f-906c-1ca8379f3fa2","Type":"ContainerStarted","Data":"e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.481268 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"402fddde-9ee1-4d1f-906c-1ca8379f3fa2","Type":"ContainerStarted","Data":"24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.481283 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"402fddde-9ee1-4d1f-906c-1ca8379f3fa2","Type":"ContainerStarted","Data":"57b6a36ec5e6c7d97cb6dea8d4a9ff6a6a6f0e8220712a03685a892f6de01d49"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.484285 4775 generic.go:334] "Generic (PLEG): container finished" podID="bf62a13f-3710-46a8-841f-26c01081361d" containerID="650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe" exitCode=0
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.484314 4775 generic.go:334] "Generic (PLEG): container finished" podID="bf62a13f-3710-46a8-841f-26c01081361d" containerID="19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d" exitCode=0
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.484349 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerDied","Data":"650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.484849 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerDied","Data":"19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.486796 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a38d394-4c5f-4d60-92af-3407e58769da","Type":"ContainerStarted","Data":"ca70a7c43721ca478398eb97160ebb64c49f7b8698965395cefa29e5fe7fe40a"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.486909 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.489386 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7ee223f-18f4-4136-87e1-3074d439352d","Type":"ContainerDied","Data":"981c6a1fd5d970d8d4fee9a36e02e810bc07251f635f3a99e67a7386c61084c3"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.489447 4775 scope.go:117] "RemoveContainer" containerID="cd5237a60c2b6325119698e2f90f63501a81854461127cdf5e7494943612f649"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.489472 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.497241 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ae35de2-6d67-497a-840a-bc88ddf16205","Type":"ContainerStarted","Data":"bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.497288 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ae35de2-6d67-497a-840a-bc88ddf16205","Type":"ContainerStarted","Data":"f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.497301 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ae35de2-6d67-497a-840a-bc88ddf16205","Type":"ContainerStarted","Data":"5da7ff240fac9a046132e861e48a850c45728ee1a82385160e18d1990b56258c"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.504724 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"64e2c2e7-000b-4f8a-a064-209cd6036632","Type":"ContainerStarted","Data":"29e372852f9d8ee0865963b617ec7767d2c28de319cf9961c7885adec4e675b3"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.504779 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"64e2c2e7-000b-4f8a-a064-209cd6036632","Type":"ContainerStarted","Data":"a4d1f17eeff8258f3a07a0a9eda91498f3414961de8423ca288ced54db01f8e6"}
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.505159 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.513067 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.513046913 podStartE2EDuration="2.513046913s" podCreationTimestamp="2025-11-25 19:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:11.510310659 +0000 UTC m=+1233.426673065" watchObservedRunningTime="2025-11-25 19:54:11.513046913 +0000 UTC m=+1233.429409279"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.544672 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.565525 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.577494 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.579480 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.123723557 podStartE2EDuration="2.579463586s" podCreationTimestamp="2025-11-25 19:54:09 +0000 UTC" firstStartedPulling="2025-11-25 19:54:10.239113822 +0000 UTC m=+1232.155476188" lastFinishedPulling="2025-11-25 19:54:10.694853851 +0000 UTC m=+1232.611216217" observedRunningTime="2025-11-25 19:54:11.54833991 +0000 UTC m=+1233.464702276" watchObservedRunningTime="2025-11-25 19:54:11.579463586 +0000 UTC m=+1233.495825962"
Nov 25 19:54:11 crc kubenswrapper[4775]: E1125 19:54:11.592809 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ee223f-18f4-4136-87e1-3074d439352d" containerName="nova-scheduler-scheduler"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.592857 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ee223f-18f4-4136-87e1-3074d439352d" containerName="nova-scheduler-scheduler"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.593271 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ee223f-18f4-4136-87e1-3074d439352d" containerName="nova-scheduler-scheduler"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.594037 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.594131 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.596909 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.600548 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.600532103 podStartE2EDuration="1.600532103s" podCreationTimestamp="2025-11-25 19:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:11.569978232 +0000 UTC m=+1233.486340608" watchObservedRunningTime="2025-11-25 19:54:11.600532103 +0000 UTC m=+1233.516894469"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.609269 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.609253877 podStartE2EDuration="2.609253877s" podCreationTimestamp="2025-11-25 19:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:11.589117136 +0000 UTC m=+1233.505479522" watchObservedRunningTime="2025-11-25 19:54:11.609253877 +0000 UTC m=+1233.525616243"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.767076 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.767269 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdgll\" (UniqueName: \"kubernetes.io/projected/a0a9d6a2-9240-4045-bafb-524ed57408bc-kube-api-access-tdgll\") pod \"nova-scheduler-0\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.767305 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-config-data\") pod \"nova-scheduler-0\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.869286 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdgll\" (UniqueName: \"kubernetes.io/projected/a0a9d6a2-9240-4045-bafb-524ed57408bc-kube-api-access-tdgll\") pod \"nova-scheduler-0\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.869328 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-config-data\") pod \"nova-scheduler-0\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.869425 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.874104 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.884902 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-config-data\") pod \"nova-scheduler-0\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.885691 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdgll\" (UniqueName: \"kubernetes.io/projected/a0a9d6a2-9240-4045-bafb-524ed57408bc-kube-api-access-tdgll\") pod \"nova-scheduler-0\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " pod="openstack/nova-scheduler-0"
Nov 25 19:54:11 crc kubenswrapper[4775]: I1125 19:54:11.924081 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 25 19:54:12 crc kubenswrapper[4775]: I1125 19:54:12.348901 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 19:54:12 crc kubenswrapper[4775]: W1125 19:54:12.355885 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0a9d6a2_9240_4045_bafb_524ed57408bc.slice/crio-84d6e9aae2fbd79679fcf4fcd08afea623c4053e4de3f07f1aec3265667ceff5 WatchSource:0}: Error finding container 84d6e9aae2fbd79679fcf4fcd08afea623c4053e4de3f07f1aec3265667ceff5: Status 404 returned error can't find the container with id 84d6e9aae2fbd79679fcf4fcd08afea623c4053e4de3f07f1aec3265667ceff5
Nov 25 19:54:12 crc kubenswrapper[4775]: I1125 19:54:12.522872 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0a9d6a2-9240-4045-bafb-524ed57408bc","Type":"ContainerStarted","Data":"84d6e9aae2fbd79679fcf4fcd08afea623c4053e4de3f07f1aec3265667ceff5"}
Nov 25 19:54:12 crc kubenswrapper[4775]: I1125 19:54:12.861122 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ee223f-18f4-4136-87e1-3074d439352d" path="/var/lib/kubelet/pods/a7ee223f-18f4-4136-87e1-3074d439352d/volumes"
Nov 25 19:54:13 crc kubenswrapper[4775]: I1125 19:54:13.538205 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0a9d6a2-9240-4045-bafb-524ed57408bc","Type":"ContainerStarted","Data":"dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe"}
Nov 25 19:54:13 crc kubenswrapper[4775]: I1125 19:54:13.570223 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.570190637 podStartE2EDuration="2.570190637s" podCreationTimestamp="2025-11-25 19:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:13.563455775 +0000 UTC m=+1235.479818141" watchObservedRunningTime="2025-11-25 19:54:13.570190637 +0000 UTC m=+1235.486553083"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.305016 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.414233 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnq9w\" (UniqueName: \"kubernetes.io/projected/bf62a13f-3710-46a8-841f-26c01081361d-kube-api-access-nnq9w\") pod \"bf62a13f-3710-46a8-841f-26c01081361d\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") "
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.414298 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-sg-core-conf-yaml\") pod \"bf62a13f-3710-46a8-841f-26c01081361d\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") "
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.414351 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-combined-ca-bundle\") pod \"bf62a13f-3710-46a8-841f-26c01081361d\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") "
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.414398 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-scripts\") pod \"bf62a13f-3710-46a8-841f-26c01081361d\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") "
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.414435 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-config-data\") pod \"bf62a13f-3710-46a8-841f-26c01081361d\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") "
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.414476 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-run-httpd\") pod \"bf62a13f-3710-46a8-841f-26c01081361d\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") "
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.414630 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-log-httpd\") pod \"bf62a13f-3710-46a8-841f-26c01081361d\" (UID: \"bf62a13f-3710-46a8-841f-26c01081361d\") "
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.415479 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bf62a13f-3710-46a8-841f-26c01081361d" (UID: "bf62a13f-3710-46a8-841f-26c01081361d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.415498 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bf62a13f-3710-46a8-841f-26c01081361d" (UID: "bf62a13f-3710-46a8-841f-26c01081361d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.421195 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf62a13f-3710-46a8-841f-26c01081361d-kube-api-access-nnq9w" (OuterVolumeSpecName: "kube-api-access-nnq9w") pod "bf62a13f-3710-46a8-841f-26c01081361d" (UID: "bf62a13f-3710-46a8-841f-26c01081361d"). InnerVolumeSpecName "kube-api-access-nnq9w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.436198 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-scripts" (OuterVolumeSpecName: "scripts") pod "bf62a13f-3710-46a8-841f-26c01081361d" (UID: "bf62a13f-3710-46a8-841f-26c01081361d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.461745 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bf62a13f-3710-46a8-841f-26c01081361d" (UID: "bf62a13f-3710-46a8-841f-26c01081361d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.516554 4775 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.516581 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnq9w\" (UniqueName: \"kubernetes.io/projected/bf62a13f-3710-46a8-841f-26c01081361d-kube-api-access-nnq9w\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.516594 4775 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.516603 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.516611 4775 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf62a13f-3710-46a8-841f-26c01081361d-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.519162 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf62a13f-3710-46a8-841f-26c01081361d" (UID: "bf62a13f-3710-46a8-841f-26c01081361d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.530833 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-config-data" (OuterVolumeSpecName: "config-data") pod "bf62a13f-3710-46a8-841f-26c01081361d" (UID: "bf62a13f-3710-46a8-841f-26c01081361d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.550171 4775 generic.go:334] "Generic (PLEG): container finished" podID="bf62a13f-3710-46a8-841f-26c01081361d" containerID="53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4" exitCode=0
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.550292 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.550343 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerDied","Data":"53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4"}
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.550370 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf62a13f-3710-46a8-841f-26c01081361d","Type":"ContainerDied","Data":"b2ca78eb28d70b0ac21a19fd31dca742b9fc94ed2dd443af4725b2ae189e3eec"}
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.550385 4775 scope.go:117] "RemoveContainer" containerID="650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.578010 4775 scope.go:117] "RemoveContainer" containerID="a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.585684 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.593825 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.604569 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 25 19:54:14 crc kubenswrapper[4775]: E1125 19:54:14.604929 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="ceilometer-notification-agent"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.604944 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="ceilometer-notification-agent"
Nov 25 19:54:14 crc kubenswrapper[4775]: E1125 19:54:14.604963 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="proxy-httpd"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.604970 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="proxy-httpd"
Nov 25 19:54:14 crc kubenswrapper[4775]: E1125 19:54:14.604980 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="ceilometer-central-agent"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.604986 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="ceilometer-central-agent"
Nov 25 19:54:14 crc kubenswrapper[4775]: E1125 19:54:14.605004 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="sg-core"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.605010 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="sg-core"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.605165 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="ceilometer-notification-agent"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.605183 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="proxy-httpd"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.605194 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="sg-core"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.605206 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf62a13f-3710-46a8-841f-26c01081361d" containerName="ceilometer-central-agent"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.606756 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.609381 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.614110 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.614520 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.618881 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.618986 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf62a13f-3710-46a8-841f-26c01081361d-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.623143 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.642730 4775 scope.go:117] "RemoveContainer" containerID="53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.663732 4775 scope.go:117] "RemoveContainer" containerID="19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.683241 4775 scope.go:117] "RemoveContainer" containerID="650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe"
Nov 25 19:54:14 crc kubenswrapper[4775]: E1125 19:54:14.683672 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe\": container with ID starting with 650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe not found: ID does not exist" containerID="650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.683721 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe"} err="failed to get container status \"650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe\": rpc error: code = NotFound desc = could not find container \"650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe\": container with ID starting with 650843ed79aabc4dd7fc04e26741791d0fa0f974117f6fb2f172f5c24f0045fe not found: ID does not exist"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.683752 4775 scope.go:117] "RemoveContainer" containerID="a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77"
Nov 25 19:54:14 crc kubenswrapper[4775]: E1125 19:54:14.684170 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77\": container with ID starting with a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77 not found: ID does not exist" containerID="a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.684196 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77"} err="failed to get container status \"a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77\": rpc error: code = NotFound desc = could not find container \"a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77\": container with ID starting with a2b57c20392b716a2c25b7324a36191185587e97b2a150befa00273d5925fc77 not found: ID does not exist"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.684215 4775 scope.go:117] "RemoveContainer" containerID="53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4"
Nov 25 19:54:14 crc kubenswrapper[4775]: E1125 19:54:14.684488 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4\": container with ID starting with 53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4 not found: ID does not exist" containerID="53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.684513 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4"} err="failed to get container status \"53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4\": rpc error: code = NotFound desc = could not find container \"53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4\": container with ID starting with 53db6a4ee8a22ff48c990f900f56c37fb3e1dfcf67d9dc3298b4dbd1f64ff6c4 not found: ID does not exist"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.684530 4775 scope.go:117] "RemoveContainer" containerID="19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d"
Nov 25 19:54:14 crc kubenswrapper[4775]: E1125 19:54:14.684797 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d\": container with ID starting with 19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d not found: ID does not exist" containerID="19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.684821 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d"} err="failed to get container status \"19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d\": rpc error: code = NotFound desc = could not find container \"19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d\": container with ID starting with 19f5f85b9f176b28bce660ab1cc4c3c45a45ec4a35bcaa573f249f3660f2915d not found: ID does not exist"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.720272 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.720537 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.720675 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.720811 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-run-httpd\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.720887 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtx7l\" (UniqueName: \"kubernetes.io/projected/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-kube-api-access-qtx7l\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.720909 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-config-data\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.720931 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-log-httpd\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.721024 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-scripts\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.823072 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-scripts\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.823180 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.823276 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.823329 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.823372 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-run-httpd\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.823450 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtx7l\" (UniqueName: \"kubernetes.io/projected/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-kube-api-access-qtx7l\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.823480 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-config-data\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.823512 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-log-httpd\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.824258 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-log-httpd\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.824391 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-run-httpd\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0"
Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.829366 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0" Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.829929 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0" Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.830619 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-scripts\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0" Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.831271 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-config-data\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0" Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.831857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0" Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.853501 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtx7l\" (UniqueName: \"kubernetes.io/projected/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-kube-api-access-qtx7l\") 
pod \"ceilometer-0\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " pod="openstack/ceilometer-0" Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.866840 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf62a13f-3710-46a8-841f-26c01081361d" path="/var/lib/kubelet/pods/bf62a13f-3710-46a8-841f-26c01081361d/volumes" Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.901453 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.901533 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 19:54:14 crc kubenswrapper[4775]: I1125 19:54:14.930444 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:54:15 crc kubenswrapper[4775]: I1125 19:54:15.412374 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:54:15 crc kubenswrapper[4775]: I1125 19:54:15.557469 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerStarted","Data":"14953646e914c353b9443964e51d4eb57df7ca4c328bdb4dd26b2c24e564e7bf"} Nov 25 19:54:16 crc kubenswrapper[4775]: I1125 19:54:16.591184 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerStarted","Data":"38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a"} Nov 25 19:54:16 crc kubenswrapper[4775]: I1125 19:54:16.925520 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 19:54:17 crc kubenswrapper[4775]: I1125 19:54:17.609595 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerStarted","Data":"6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05"} Nov 25 19:54:18 crc kubenswrapper[4775]: I1125 19:54:18.624076 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerStarted","Data":"f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f"} Nov 25 19:54:19 crc kubenswrapper[4775]: I1125 19:54:19.636339 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerStarted","Data":"eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936"} Nov 25 19:54:19 crc kubenswrapper[4775]: I1125 19:54:19.636874 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 19:54:19 crc kubenswrapper[4775]: I1125 19:54:19.779310 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 19:54:19 crc kubenswrapper[4775]: I1125 19:54:19.806309 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.42809474 podStartE2EDuration="5.806283715s" podCreationTimestamp="2025-11-25 19:54:14 +0000 UTC" firstStartedPulling="2025-11-25 19:54:15.424302265 +0000 UTC m=+1237.340664631" lastFinishedPulling="2025-11-25 19:54:18.8024912 +0000 UTC m=+1240.718853606" observedRunningTime="2025-11-25 19:54:19.661270642 +0000 UTC m=+1241.577633018" watchObservedRunningTime="2025-11-25 19:54:19.806283715 +0000 UTC m=+1241.722646071" Nov 25 19:54:19 crc kubenswrapper[4775]: I1125 19:54:19.901121 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 19:54:19 crc kubenswrapper[4775]: I1125 19:54:19.901459 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 19:54:19 crc kubenswrapper[4775]: I1125 19:54:19.940464 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 19:54:19 crc kubenswrapper[4775]: I1125 19:54:19.940525 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 19:54:20 crc kubenswrapper[4775]: I1125 19:54:20.801057 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 19:54:20 crc kubenswrapper[4775]: I1125 19:54:20.917903 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.173:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 19:54:20 crc kubenswrapper[4775]: I1125 19:54:20.917910 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.173:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 19:54:21 crc kubenswrapper[4775]: I1125 19:54:21.024877 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 19:54:21 crc kubenswrapper[4775]: I1125 19:54:21.024888 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Nov 25 19:54:21 crc kubenswrapper[4775]: I1125 19:54:21.926154 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 19:54:21 crc kubenswrapper[4775]: I1125 19:54:21.970266 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 19:54:22 crc kubenswrapper[4775]: I1125 19:54:22.723254 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 19:54:29 crc kubenswrapper[4775]: I1125 19:54:29.918788 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 19:54:29 crc kubenswrapper[4775]: I1125 19:54:29.923112 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 19:54:29 crc kubenswrapper[4775]: I1125 19:54:29.927377 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 19:54:29 crc kubenswrapper[4775]: I1125 19:54:29.946356 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 19:54:29 crc kubenswrapper[4775]: I1125 19:54:29.946864 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 19:54:29 crc kubenswrapper[4775]: I1125 19:54:29.957986 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 19:54:29 crc kubenswrapper[4775]: I1125 19:54:29.967584 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 19:54:30 crc kubenswrapper[4775]: I1125 19:54:30.765401 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 19:54:30 crc kubenswrapper[4775]: I1125 19:54:30.771960 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Nov 25 19:54:30 crc kubenswrapper[4775]: I1125 19:54:30.792587 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.075183 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tnbk6"] Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.077441 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.094073 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tnbk6"] Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.204527 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-config\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.204571 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.204596 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.204633 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ldb\" (UniqueName: \"kubernetes.io/projected/e38a12cb-af03-43cf-97fb-05f2e5364c82-kube-api-access-45ldb\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.204813 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-dns-svc\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.305918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-config\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.305964 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.305991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.306026 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-45ldb\" (UniqueName: \"kubernetes.io/projected/e38a12cb-af03-43cf-97fb-05f2e5364c82-kube-api-access-45ldb\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.306079 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-dns-svc\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.307217 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.307346 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.307514 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-config\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.307711 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-dns-svc\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.325604 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45ldb\" (UniqueName: \"kubernetes.io/projected/e38a12cb-af03-43cf-97fb-05f2e5364c82-kube-api-access-45ldb\") pod \"dnsmasq-dns-5b856c5697-tnbk6\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.398114 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:31 crc kubenswrapper[4775]: I1125 19:54:31.903545 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tnbk6"] Nov 25 19:54:32 crc kubenswrapper[4775]: I1125 19:54:32.781615 4775 generic.go:334] "Generic (PLEG): container finished" podID="e38a12cb-af03-43cf-97fb-05f2e5364c82" containerID="9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884" exitCode=0 Nov 25 19:54:32 crc kubenswrapper[4775]: I1125 19:54:32.781678 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" event={"ID":"e38a12cb-af03-43cf-97fb-05f2e5364c82","Type":"ContainerDied","Data":"9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884"} Nov 25 19:54:32 crc kubenswrapper[4775]: I1125 19:54:32.782786 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" event={"ID":"e38a12cb-af03-43cf-97fb-05f2e5364c82","Type":"ContainerStarted","Data":"91eb258b60e43c785990e7504f52e189239a6d14a2bfd09d8c1491233cf1f575"} Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.137854 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:54:33 crc 
kubenswrapper[4775]: I1125 19:54:33.138201 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="ceilometer-central-agent" containerID="cri-o://38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a" gracePeriod=30 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.138324 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="proxy-httpd" containerID="cri-o://eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936" gracePeriod=30 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.138377 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="ceilometer-notification-agent" containerID="cri-o://6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05" gracePeriod=30 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.138363 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="sg-core" containerID="cri-o://f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f" gracePeriod=30 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.151240 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.177:3000/\": EOF" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.373367 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.648872 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.745155 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-config-data\") pod \"3956a135-c94d-443b-9686-4b77db2e7df8\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.745502 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sklbl\" (UniqueName: \"kubernetes.io/projected/3956a135-c94d-443b-9686-4b77db2e7df8-kube-api-access-sklbl\") pod \"3956a135-c94d-443b-9686-4b77db2e7df8\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.745580 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-combined-ca-bundle\") pod \"3956a135-c94d-443b-9686-4b77db2e7df8\" (UID: \"3956a135-c94d-443b-9686-4b77db2e7df8\") " Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.765261 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3956a135-c94d-443b-9686-4b77db2e7df8-kube-api-access-sklbl" (OuterVolumeSpecName: "kube-api-access-sklbl") pod "3956a135-c94d-443b-9686-4b77db2e7df8" (UID: "3956a135-c94d-443b-9686-4b77db2e7df8"). InnerVolumeSpecName "kube-api-access-sklbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.773226 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3956a135-c94d-443b-9686-4b77db2e7df8" (UID: "3956a135-c94d-443b-9686-4b77db2e7df8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.775622 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-config-data" (OuterVolumeSpecName: "config-data") pod "3956a135-c94d-443b-9686-4b77db2e7df8" (UID: "3956a135-c94d-443b-9686-4b77db2e7df8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.791267 4775 generic.go:334] "Generic (PLEG): container finished" podID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerID="eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936" exitCode=0 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.791301 4775 generic.go:334] "Generic (PLEG): container finished" podID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerID="f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f" exitCode=2 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.791311 4775 generic.go:334] "Generic (PLEG): container finished" podID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerID="38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a" exitCode=0 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.791356 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerDied","Data":"eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936"} Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.791388 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerDied","Data":"f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f"} Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.791401 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerDied","Data":"38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a"} Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.793517 4775 generic.go:334] "Generic (PLEG): container finished" podID="3956a135-c94d-443b-9686-4b77db2e7df8" containerID="d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c" exitCode=137 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.793587 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3956a135-c94d-443b-9686-4b77db2e7df8","Type":"ContainerDied","Data":"d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c"} Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.793637 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3956a135-c94d-443b-9686-4b77db2e7df8","Type":"ContainerDied","Data":"68f694ff5da30e4f25f3508ac61bc6b47416f28ed67c8f56e1a2b64509afb0d3"} Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.793680 4775 scope.go:117] "RemoveContainer" containerID="d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.793831 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.800819 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" event={"ID":"e38a12cb-af03-43cf-97fb-05f2e5364c82","Type":"ContainerStarted","Data":"b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca"} Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.800884 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-log" containerID="cri-o://f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641" gracePeriod=30 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.801008 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-api" containerID="cri-o://bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054" gracePeriod=30 Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.801223 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.825512 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" podStartSLOduration=2.825489979 podStartE2EDuration="2.825489979s" podCreationTimestamp="2025-11-25 19:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:33.823905706 +0000 UTC m=+1255.740268082" watchObservedRunningTime="2025-11-25 19:54:33.825489979 +0000 UTC m=+1255.741852365" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.848740 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.848771 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sklbl\" (UniqueName: \"kubernetes.io/projected/3956a135-c94d-443b-9686-4b77db2e7df8-kube-api-access-sklbl\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.848782 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3956a135-c94d-443b-9686-4b77db2e7df8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.858837 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.860507 4775 scope.go:117] "RemoveContainer" containerID="d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c" Nov 25 19:54:33 crc kubenswrapper[4775]: E1125 19:54:33.861043 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c\": container with ID starting with d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c not found: ID does not exist" containerID="d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.861076 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c"} err="failed to get container status \"d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c\": rpc error: code = NotFound desc = could not find container \"d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c\": container with ID starting with 
d99cdb09ea8f0176603905d92d05e6bf8218f7812befe2983f6b7a4e466eaf2c not found: ID does not exist" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.872298 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.893589 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 19:54:33 crc kubenswrapper[4775]: E1125 19:54:33.894187 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3956a135-c94d-443b-9686-4b77db2e7df8" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.894206 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3956a135-c94d-443b-9686-4b77db2e7df8" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.894379 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3956a135-c94d-443b-9686-4b77db2e7df8" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.895025 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.900896 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.901256 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.901480 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 19:54:33 crc kubenswrapper[4775]: I1125 19:54:33.910456 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.051713 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.051824 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.051858 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfgg7\" (UniqueName: \"kubernetes.io/projected/3ec61d94-917a-4ddf-99c3-9a56d212ef64-kube-api-access-mfgg7\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 
25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.051889 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.051921 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.153444 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.153527 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.153557 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfgg7\" (UniqueName: \"kubernetes.io/projected/3ec61d94-917a-4ddf-99c3-9a56d212ef64-kube-api-access-mfgg7\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc 
kubenswrapper[4775]: I1125 19:54:34.153586 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.153619 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.158471 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.158992 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.159417 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.163178 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec61d94-917a-4ddf-99c3-9a56d212ef64-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.174189 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfgg7\" (UniqueName: \"kubernetes.io/projected/3ec61d94-917a-4ddf-99c3-9a56d212ef64-kube-api-access-mfgg7\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ec61d94-917a-4ddf-99c3-9a56d212ef64\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.212230 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.648543 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 19:54:34 crc kubenswrapper[4775]: W1125 19:54:34.652121 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ec61d94_917a_4ddf_99c3_9a56d212ef64.slice/crio-1e3686f245a7a8e1c6951f12e271169105ed123b2217e29d304349392ca3a007 WatchSource:0}: Error finding container 1e3686f245a7a8e1c6951f12e271169105ed123b2217e29d304349392ca3a007: Status 404 returned error can't find the container with id 1e3686f245a7a8e1c6951f12e271169105ed123b2217e29d304349392ca3a007 Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.809588 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ec61d94-917a-4ddf-99c3-9a56d212ef64","Type":"ContainerStarted","Data":"1e3686f245a7a8e1c6951f12e271169105ed123b2217e29d304349392ca3a007"} Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.811408 4775 generic.go:334] "Generic (PLEG): container finished" podID="2ae35de2-6d67-497a-840a-bc88ddf16205" 
containerID="f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641" exitCode=143 Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.811479 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ae35de2-6d67-497a-840a-bc88ddf16205","Type":"ContainerDied","Data":"f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641"} Nov 25 19:54:34 crc kubenswrapper[4775]: I1125 19:54:34.867138 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3956a135-c94d-443b-9686-4b77db2e7df8" path="/var/lib/kubelet/pods/3956a135-c94d-443b-9686-4b77db2e7df8/volumes" Nov 25 19:54:35 crc kubenswrapper[4775]: I1125 19:54:35.826763 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ec61d94-917a-4ddf-99c3-9a56d212ef64","Type":"ContainerStarted","Data":"57628445d30b93799757bddd3e54703d0803fa485d0fdb391a04a292546de06c"} Nov 25 19:54:35 crc kubenswrapper[4775]: I1125 19:54:35.847115 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.8470990880000002 podStartE2EDuration="2.847099088s" podCreationTimestamp="2025-11-25 19:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:35.844811386 +0000 UTC m=+1257.761173792" watchObservedRunningTime="2025-11-25 19:54:35.847099088 +0000 UTC m=+1257.763461444" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.353949 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.442106 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516628 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-config-data\") pod \"2ae35de2-6d67-497a-840a-bc88ddf16205\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516717 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-ceilometer-tls-certs\") pod \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516755 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-log-httpd\") pod \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516772 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-config-data\") pod \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516797 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-run-httpd\") pod \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516821 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-combined-ca-bundle\") pod \"2ae35de2-6d67-497a-840a-bc88ddf16205\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516876 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-combined-ca-bundle\") pod \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516917 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ae35de2-6d67-497a-840a-bc88ddf16205-logs\") pod \"2ae35de2-6d67-497a-840a-bc88ddf16205\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516966 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-scripts\") pod \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.516998 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh865\" (UniqueName: \"kubernetes.io/projected/2ae35de2-6d67-497a-840a-bc88ddf16205-kube-api-access-fh865\") pod \"2ae35de2-6d67-497a-840a-bc88ddf16205\" (UID: \"2ae35de2-6d67-497a-840a-bc88ddf16205\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.517027 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-sg-core-conf-yaml\") pod \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.517085 4775 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtx7l\" (UniqueName: \"kubernetes.io/projected/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-kube-api-access-qtx7l\") pod \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\" (UID: \"fe023d49-e66b-4643-b2c8-de9cdfee1c0f\") " Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.517288 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe023d49-e66b-4643-b2c8-de9cdfee1c0f" (UID: "fe023d49-e66b-4643-b2c8-de9cdfee1c0f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.517630 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ae35de2-6d67-497a-840a-bc88ddf16205-logs" (OuterVolumeSpecName: "logs") pod "2ae35de2-6d67-497a-840a-bc88ddf16205" (UID: "2ae35de2-6d67-497a-840a-bc88ddf16205"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.517806 4775 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.517833 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ae35de2-6d67-497a-840a-bc88ddf16205-logs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.517893 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe023d49-e66b-4643-b2c8-de9cdfee1c0f" (UID: "fe023d49-e66b-4643-b2c8-de9cdfee1c0f"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.522065 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-scripts" (OuterVolumeSpecName: "scripts") pod "fe023d49-e66b-4643-b2c8-de9cdfee1c0f" (UID: "fe023d49-e66b-4643-b2c8-de9cdfee1c0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.524149 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae35de2-6d67-497a-840a-bc88ddf16205-kube-api-access-fh865" (OuterVolumeSpecName: "kube-api-access-fh865") pod "2ae35de2-6d67-497a-840a-bc88ddf16205" (UID: "2ae35de2-6d67-497a-840a-bc88ddf16205"). InnerVolumeSpecName "kube-api-access-fh865". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.528621 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-kube-api-access-qtx7l" (OuterVolumeSpecName: "kube-api-access-qtx7l") pod "fe023d49-e66b-4643-b2c8-de9cdfee1c0f" (UID: "fe023d49-e66b-4643-b2c8-de9cdfee1c0f"). InnerVolumeSpecName "kube-api-access-qtx7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.555793 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-config-data" (OuterVolumeSpecName: "config-data") pod "2ae35de2-6d67-497a-840a-bc88ddf16205" (UID: "2ae35de2-6d67-497a-840a-bc88ddf16205"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.555825 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ae35de2-6d67-497a-840a-bc88ddf16205" (UID: "2ae35de2-6d67-497a-840a-bc88ddf16205"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.565480 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe023d49-e66b-4643-b2c8-de9cdfee1c0f" (UID: "fe023d49-e66b-4643-b2c8-de9cdfee1c0f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.588353 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fe023d49-e66b-4643-b2c8-de9cdfee1c0f" (UID: "fe023d49-e66b-4643-b2c8-de9cdfee1c0f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.596533 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe023d49-e66b-4643-b2c8-de9cdfee1c0f" (UID: "fe023d49-e66b-4643-b2c8-de9cdfee1c0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.619794 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.619823 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh865\" (UniqueName: \"kubernetes.io/projected/2ae35de2-6d67-497a-840a-bc88ddf16205-kube-api-access-fh865\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.619837 4775 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.619848 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtx7l\" (UniqueName: \"kubernetes.io/projected/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-kube-api-access-qtx7l\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.619857 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.619866 4775 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.619875 4775 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.619883 
4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae35de2-6d67-497a-840a-bc88ddf16205-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.619890 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.627396 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-config-data" (OuterVolumeSpecName: "config-data") pod "fe023d49-e66b-4643-b2c8-de9cdfee1c0f" (UID: "fe023d49-e66b-4643-b2c8-de9cdfee1c0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.721251 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe023d49-e66b-4643-b2c8-de9cdfee1c0f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.851487 4775 generic.go:334] "Generic (PLEG): container finished" podID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerID="bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054" exitCode=0 Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.851568 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ae35de2-6d67-497a-840a-bc88ddf16205","Type":"ContainerDied","Data":"bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054"} Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.851597 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"2ae35de2-6d67-497a-840a-bc88ddf16205","Type":"ContainerDied","Data":"5da7ff240fac9a046132e861e48a850c45728ee1a82385160e18d1990b56258c"} Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.851630 4775 scope.go:117] "RemoveContainer" containerID="bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.852121 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.854915 4775 generic.go:334] "Generic (PLEG): container finished" podID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerID="6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05" exitCode=0 Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.854961 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerDied","Data":"6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05"} Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.855012 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe023d49-e66b-4643-b2c8-de9cdfee1c0f","Type":"ContainerDied","Data":"14953646e914c353b9443964e51d4eb57df7ca4c328bdb4dd26b2c24e564e7bf"} Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.855019 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.871189 4775 scope.go:117] "RemoveContainer" containerID="f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.889502 4775 scope.go:117] "RemoveContainer" containerID="bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054" Nov 25 19:54:37 crc kubenswrapper[4775]: E1125 19:54:37.889938 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054\": container with ID starting with bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054 not found: ID does not exist" containerID="bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.889966 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054"} err="failed to get container status \"bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054\": rpc error: code = NotFound desc = could not find container \"bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054\": container with ID starting with bab433a44a91d00b48c7db0ea978538ddecf5916dfbd71ec8f05cfa1024e5054 not found: ID does not exist" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.889983 4775 scope.go:117] "RemoveContainer" containerID="f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641" Nov 25 19:54:37 crc kubenswrapper[4775]: E1125 19:54:37.890340 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641\": container with ID starting with 
f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641 not found: ID does not exist" containerID="f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.890361 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641"} err="failed to get container status \"f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641\": rpc error: code = NotFound desc = could not find container \"f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641\": container with ID starting with f129e6fb2569b9211c30e0efaeb39b2cd1e17cc22d5bf1ca29c3fe367a9c5641 not found: ID does not exist" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.890373 4775 scope.go:117] "RemoveContainer" containerID="eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.904002 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.918014 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.930831 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.940547 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.963893 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:54:37 crc kubenswrapper[4775]: E1125 19:54:37.964433 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="ceilometer-central-agent" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964449 4775 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="ceilometer-central-agent" Nov 25 19:54:37 crc kubenswrapper[4775]: E1125 19:54:37.964465 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-log" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964472 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-log" Nov 25 19:54:37 crc kubenswrapper[4775]: E1125 19:54:37.964493 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="ceilometer-notification-agent" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964501 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="ceilometer-notification-agent" Nov 25 19:54:37 crc kubenswrapper[4775]: E1125 19:54:37.964514 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="proxy-httpd" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964521 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="proxy-httpd" Nov 25 19:54:37 crc kubenswrapper[4775]: E1125 19:54:37.964539 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="sg-core" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964546 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="sg-core" Nov 25 19:54:37 crc kubenswrapper[4775]: E1125 19:54:37.964595 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-api" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964603 4775 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-api" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964813 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-log" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964841 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" containerName="nova-api-api" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964850 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="ceilometer-notification-agent" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964860 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="sg-core" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964880 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="ceilometer-central-agent" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.964891 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" containerName="proxy-httpd" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.966754 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.966867 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.967181 4775 scope.go:117] "RemoveContainer" containerID="f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.970257 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.970388 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.970336 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.975026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.976882 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.979245 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.979245 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.979490 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 19:54:37 crc kubenswrapper[4775]: I1125 19:54:37.989331 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.027013 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh8f8\" (UniqueName: \"kubernetes.io/projected/679cadba-ff1b-4691-94c6-d218f83173f0-kube-api-access-jh8f8\") pod \"ceilometer-0\" 
(UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.027203 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.027320 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.027402 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-log-httpd\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.027482 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-scripts\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.027594 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 
19:54:38.027706 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-config-data\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.027798 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-run-httpd\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.041792 4775 scope.go:117] "RemoveContainer" containerID="6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.063106 4775 scope.go:117] "RemoveContainer" containerID="38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.090186 4775 scope.go:117] "RemoveContainer" containerID="eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936" Nov 25 19:54:38 crc kubenswrapper[4775]: E1125 19:54:38.091541 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936\": container with ID starting with eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936 not found: ID does not exist" containerID="eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.091585 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936"} err="failed to get container status 
\"eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936\": rpc error: code = NotFound desc = could not find container \"eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936\": container with ID starting with eb7a5dfdc26fcf7e335f3efd93c43a8244bac17a21e78571e7107f35c2e61936 not found: ID does not exist" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.091618 4775 scope.go:117] "RemoveContainer" containerID="f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f" Nov 25 19:54:38 crc kubenswrapper[4775]: E1125 19:54:38.095103 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f\": container with ID starting with f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f not found: ID does not exist" containerID="f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.095241 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f"} err="failed to get container status \"f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f\": rpc error: code = NotFound desc = could not find container \"f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f\": container with ID starting with f84a3439133f23cf78c637093c7250681f3bef560761048786c87c29c6724b5f not found: ID does not exist" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.095270 4775 scope.go:117] "RemoveContainer" containerID="6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05" Nov 25 19:54:38 crc kubenswrapper[4775]: E1125 19:54:38.095957 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05\": container with ID starting with 6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05 not found: ID does not exist" containerID="6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.096003 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05"} err="failed to get container status \"6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05\": rpc error: code = NotFound desc = could not find container \"6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05\": container with ID starting with 6ec847fd8830de0db9f16e31da1c18d43f271a59920d7ced65ab05ea6b97ab05 not found: ID does not exist" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.096021 4775 scope.go:117] "RemoveContainer" containerID="38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a" Nov 25 19:54:38 crc kubenswrapper[4775]: E1125 19:54:38.096234 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a\": container with ID starting with 38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a not found: ID does not exist" containerID="38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.096257 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a"} err="failed to get container status \"38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a\": rpc error: code = NotFound desc = could not find container \"38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a\": container with ID 
starting with 38aaed0bde9bb2d48d8ec4b4efeea20af2b7759ed4d20c10670d75cda2b8c87a not found: ID does not exist" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.136487 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-config-data\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.136582 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-run-httpd\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.136614 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.136771 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cjvs\" (UniqueName: \"kubernetes.io/projected/73628ae6-a3da-4ce2-aba2-a59adddb6011-kube-api-access-6cjvs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.136868 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh8f8\" (UniqueName: \"kubernetes.io/projected/679cadba-ff1b-4691-94c6-d218f83173f0-kube-api-access-jh8f8\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 
19:54:38.136902 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-config-data\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.136933 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.136960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.137007 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-log-httpd\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.137036 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-public-tls-certs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.137082 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/73628ae6-a3da-4ce2-aba2-a59adddb6011-logs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.137130 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-scripts\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.137220 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.137266 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-internal-tls-certs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.138895 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-run-httpd\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.142664 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-config-data\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.143385 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-log-httpd\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.144042 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.147419 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.147797 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-scripts\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.148164 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.155635 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh8f8\" (UniqueName: \"kubernetes.io/projected/679cadba-ff1b-4691-94c6-d218f83173f0-kube-api-access-jh8f8\") pod \"ceilometer-0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") 
" pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.241342 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-config-data\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.241407 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-public-tls-certs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.241442 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73628ae6-a3da-4ce2-aba2-a59adddb6011-logs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.241516 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-internal-tls-certs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.241586 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.241610 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cjvs\" (UniqueName: 
\"kubernetes.io/projected/73628ae6-a3da-4ce2-aba2-a59adddb6011-kube-api-access-6cjvs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.242543 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73628ae6-a3da-4ce2-aba2-a59adddb6011-logs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.244918 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-public-tls-certs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.246603 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.247557 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-internal-tls-certs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.250586 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-config-data\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.259223 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6cjvs\" (UniqueName: \"kubernetes.io/projected/73628ae6-a3da-4ce2-aba2-a59adddb6011-kube-api-access-6cjvs\") pod \"nova-api-0\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.338272 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.345916 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.811119 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.867401 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae35de2-6d67-497a-840a-bc88ddf16205" path="/var/lib/kubelet/pods/2ae35de2-6d67-497a-840a-bc88ddf16205/volumes" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.868746 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe023d49-e66b-4643-b2c8-de9cdfee1c0f" path="/var/lib/kubelet/pods/fe023d49-e66b-4643-b2c8-de9cdfee1c0f/volumes" Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.874066 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerStarted","Data":"9f994a22961c3d5a9e9ca3e355e677a3807a8b8e115dc45adacc83cff510437d"} Nov 25 19:54:38 crc kubenswrapper[4775]: I1125 19:54:38.874941 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:39 crc kubenswrapper[4775]: I1125 19:54:39.213143 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:39 crc kubenswrapper[4775]: I1125 19:54:39.892751 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerStarted","Data":"2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1"} Nov 25 19:54:39 crc kubenswrapper[4775]: I1125 19:54:39.896060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73628ae6-a3da-4ce2-aba2-a59adddb6011","Type":"ContainerStarted","Data":"8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac"} Nov 25 19:54:39 crc kubenswrapper[4775]: I1125 19:54:39.896105 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73628ae6-a3da-4ce2-aba2-a59adddb6011","Type":"ContainerStarted","Data":"f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85"} Nov 25 19:54:39 crc kubenswrapper[4775]: I1125 19:54:39.896116 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73628ae6-a3da-4ce2-aba2-a59adddb6011","Type":"ContainerStarted","Data":"ef45d30b32d6646732d909a24aaffdd6b7cc4b4a1dc177b80a99c50ac21ff2b6"} Nov 25 19:54:39 crc kubenswrapper[4775]: I1125 19:54:39.924945 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.924925548 podStartE2EDuration="2.924925548s" podCreationTimestamp="2025-11-25 19:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:39.918622069 +0000 UTC m=+1261.834984475" watchObservedRunningTime="2025-11-25 19:54:39.924925548 +0000 UTC m=+1261.841287914" Nov 25 19:54:40 crc kubenswrapper[4775]: I1125 19:54:40.924448 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerStarted","Data":"9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73"} Nov 25 19:54:41 crc kubenswrapper[4775]: I1125 19:54:41.399808 4775 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:54:41 crc kubenswrapper[4775]: I1125 19:54:41.488824 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-c4ggj"] Nov 25 19:54:41 crc kubenswrapper[4775]: I1125 19:54:41.489052 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" podUID="8087212c-4a3c-486c-a884-fe42bbc31fa7" containerName="dnsmasq-dns" containerID="cri-o://c66b95a042c094d9287457daeedbe8ba96422a0454677269d717fd41a37b8967" gracePeriod=10 Nov 25 19:54:41 crc kubenswrapper[4775]: I1125 19:54:41.936737 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerStarted","Data":"b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2"} Nov 25 19:54:41 crc kubenswrapper[4775]: I1125 19:54:41.948169 4775 generic.go:334] "Generic (PLEG): container finished" podID="8087212c-4a3c-486c-a884-fe42bbc31fa7" containerID="c66b95a042c094d9287457daeedbe8ba96422a0454677269d717fd41a37b8967" exitCode=0 Nov 25 19:54:41 crc kubenswrapper[4775]: I1125 19:54:41.948216 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" event={"ID":"8087212c-4a3c-486c-a884-fe42bbc31fa7","Type":"ContainerDied","Data":"c66b95a042c094d9287457daeedbe8ba96422a0454677269d717fd41a37b8967"} Nov 25 19:54:41 crc kubenswrapper[4775]: I1125 19:54:41.948244 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" event={"ID":"8087212c-4a3c-486c-a884-fe42bbc31fa7","Type":"ContainerDied","Data":"2cceb2c1380fdd29c79df88fabde65384950092765a2f8c7a8f6a576572ac51c"} Nov 25 19:54:41 crc kubenswrapper[4775]: I1125 19:54:41.948259 4775 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2cceb2c1380fdd29c79df88fabde65384950092765a2f8c7a8f6a576572ac51c" Nov 25 19:54:41 crc kubenswrapper[4775]: I1125 19:54:41.949924 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.017181 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-dns-svc\") pod \"8087212c-4a3c-486c-a884-fe42bbc31fa7\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.017235 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-config\") pod \"8087212c-4a3c-486c-a884-fe42bbc31fa7\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.017255 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-894xs\" (UniqueName: \"kubernetes.io/projected/8087212c-4a3c-486c-a884-fe42bbc31fa7-kube-api-access-894xs\") pod \"8087212c-4a3c-486c-a884-fe42bbc31fa7\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.017368 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-sb\") pod \"8087212c-4a3c-486c-a884-fe42bbc31fa7\" (UID: \"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.017443 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-nb\") pod \"8087212c-4a3c-486c-a884-fe42bbc31fa7\" (UID: 
\"8087212c-4a3c-486c-a884-fe42bbc31fa7\") " Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.022973 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8087212c-4a3c-486c-a884-fe42bbc31fa7-kube-api-access-894xs" (OuterVolumeSpecName: "kube-api-access-894xs") pod "8087212c-4a3c-486c-a884-fe42bbc31fa7" (UID: "8087212c-4a3c-486c-a884-fe42bbc31fa7"). InnerVolumeSpecName "kube-api-access-894xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.062276 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8087212c-4a3c-486c-a884-fe42bbc31fa7" (UID: "8087212c-4a3c-486c-a884-fe42bbc31fa7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.068278 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8087212c-4a3c-486c-a884-fe42bbc31fa7" (UID: "8087212c-4a3c-486c-a884-fe42bbc31fa7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.068317 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-config" (OuterVolumeSpecName: "config") pod "8087212c-4a3c-486c-a884-fe42bbc31fa7" (UID: "8087212c-4a3c-486c-a884-fe42bbc31fa7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.073027 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8087212c-4a3c-486c-a884-fe42bbc31fa7" (UID: "8087212c-4a3c-486c-a884-fe42bbc31fa7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.119634 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.119708 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.119724 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.119739 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-894xs\" (UniqueName: \"kubernetes.io/projected/8087212c-4a3c-486c-a884-fe42bbc31fa7-kube-api-access-894xs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.119759 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8087212c-4a3c-486c-a884-fe42bbc31fa7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:42 crc kubenswrapper[4775]: I1125 19:54:42.962237 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-c4ggj" Nov 25 19:54:43 crc kubenswrapper[4775]: I1125 19:54:43.101968 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-c4ggj"] Nov 25 19:54:43 crc kubenswrapper[4775]: I1125 19:54:43.109404 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-c4ggj"] Nov 25 19:54:43 crc kubenswrapper[4775]: I1125 19:54:43.979261 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerStarted","Data":"f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009"} Nov 25 19:54:43 crc kubenswrapper[4775]: I1125 19:54:43.981177 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 19:54:44 crc kubenswrapper[4775]: I1125 19:54:44.013838 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.922439621 podStartE2EDuration="7.013811498s" podCreationTimestamp="2025-11-25 19:54:37 +0000 UTC" firstStartedPulling="2025-11-25 19:54:38.822015218 +0000 UTC m=+1260.738377594" lastFinishedPulling="2025-11-25 19:54:42.913387065 +0000 UTC m=+1264.829749471" observedRunningTime="2025-11-25 19:54:44.001175716 +0000 UTC m=+1265.917538162" watchObservedRunningTime="2025-11-25 19:54:44.013811498 +0000 UTC m=+1265.930173894" Nov 25 19:54:44 crc kubenswrapper[4775]: I1125 19:54:44.213813 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:44 crc kubenswrapper[4775]: I1125 19:54:44.235440 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:44 crc kubenswrapper[4775]: I1125 19:54:44.858389 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8087212c-4a3c-486c-a884-fe42bbc31fa7" 
path="/var/lib/kubelet/pods/8087212c-4a3c-486c-a884-fe42bbc31fa7/volumes" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.019166 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.307740 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4xj8s"] Nov 25 19:54:45 crc kubenswrapper[4775]: E1125 19:54:45.308346 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8087212c-4a3c-486c-a884-fe42bbc31fa7" containerName="dnsmasq-dns" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.308363 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8087212c-4a3c-486c-a884-fe42bbc31fa7" containerName="dnsmasq-dns" Nov 25 19:54:45 crc kubenswrapper[4775]: E1125 19:54:45.308398 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8087212c-4a3c-486c-a884-fe42bbc31fa7" containerName="init" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.308404 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8087212c-4a3c-486c-a884-fe42bbc31fa7" containerName="init" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.308554 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8087212c-4a3c-486c-a884-fe42bbc31fa7" containerName="dnsmasq-dns" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.309226 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.312611 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.324124 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.325310 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4xj8s"] Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.383177 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phf7j\" (UniqueName: \"kubernetes.io/projected/4911bcab-f90c-4d9c-b1e3-743dd010d664-kube-api-access-phf7j\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.383301 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-config-data\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.383387 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.383554 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-scripts\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.485401 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-config-data\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.485514 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.485635 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-scripts\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.485830 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phf7j\" (UniqueName: \"kubernetes.io/projected/4911bcab-f90c-4d9c-b1e3-743dd010d664-kube-api-access-phf7j\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.493168 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.494316 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-scripts\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.505898 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phf7j\" (UniqueName: \"kubernetes.io/projected/4911bcab-f90c-4d9c-b1e3-743dd010d664-kube-api-access-phf7j\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.510410 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-config-data\") pod \"nova-cell1-cell-mapping-4xj8s\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:45 crc kubenswrapper[4775]: I1125 19:54:45.628025 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:46 crc kubenswrapper[4775]: I1125 19:54:46.193079 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4xj8s"] Nov 25 19:54:46 crc kubenswrapper[4775]: W1125 19:54:46.195790 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4911bcab_f90c_4d9c_b1e3_743dd010d664.slice/crio-984912e795b6f21304a5b16060e2e47088352915506282706821a0dcbacf09f9 WatchSource:0}: Error finding container 984912e795b6f21304a5b16060e2e47088352915506282706821a0dcbacf09f9: Status 404 returned error can't find the container with id 984912e795b6f21304a5b16060e2e47088352915506282706821a0dcbacf09f9 Nov 25 19:54:47 crc kubenswrapper[4775]: I1125 19:54:47.020138 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4xj8s" event={"ID":"4911bcab-f90c-4d9c-b1e3-743dd010d664","Type":"ContainerStarted","Data":"4331e6b340bebce29a13b3e58ce23af16e329a10e5298575914e47713ebb723f"} Nov 25 19:54:47 crc kubenswrapper[4775]: I1125 19:54:47.020493 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4xj8s" event={"ID":"4911bcab-f90c-4d9c-b1e3-743dd010d664","Type":"ContainerStarted","Data":"984912e795b6f21304a5b16060e2e47088352915506282706821a0dcbacf09f9"} Nov 25 19:54:47 crc kubenswrapper[4775]: I1125 19:54:47.060454 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4xj8s" podStartSLOduration=2.060426553 podStartE2EDuration="2.060426553s" podCreationTimestamp="2025-11-25 19:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:47.044290338 +0000 UTC m=+1268.960652734" watchObservedRunningTime="2025-11-25 19:54:47.060426553 +0000 UTC m=+1268.976788949" Nov 25 19:54:48 crc 
kubenswrapper[4775]: I1125 19:54:48.347091 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 19:54:48 crc kubenswrapper[4775]: I1125 19:54:48.347315 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 19:54:49 crc kubenswrapper[4775]: I1125 19:54:49.358757 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.181:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 19:54:49 crc kubenswrapper[4775]: I1125 19:54:49.358764 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.181:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 19:54:51 crc kubenswrapper[4775]: I1125 19:54:51.074507 4775 generic.go:334] "Generic (PLEG): container finished" podID="4911bcab-f90c-4d9c-b1e3-743dd010d664" containerID="4331e6b340bebce29a13b3e58ce23af16e329a10e5298575914e47713ebb723f" exitCode=0 Nov 25 19:54:51 crc kubenswrapper[4775]: I1125 19:54:51.074597 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4xj8s" event={"ID":"4911bcab-f90c-4d9c-b1e3-743dd010d664","Type":"ContainerDied","Data":"4331e6b340bebce29a13b3e58ce23af16e329a10e5298575914e47713ebb723f"} Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.547763 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.734588 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phf7j\" (UniqueName: \"kubernetes.io/projected/4911bcab-f90c-4d9c-b1e3-743dd010d664-kube-api-access-phf7j\") pod \"4911bcab-f90c-4d9c-b1e3-743dd010d664\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.734728 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-combined-ca-bundle\") pod \"4911bcab-f90c-4d9c-b1e3-743dd010d664\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.734786 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-config-data\") pod \"4911bcab-f90c-4d9c-b1e3-743dd010d664\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.734838 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-scripts\") pod \"4911bcab-f90c-4d9c-b1e3-743dd010d664\" (UID: \"4911bcab-f90c-4d9c-b1e3-743dd010d664\") " Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.741698 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-scripts" (OuterVolumeSpecName: "scripts") pod "4911bcab-f90c-4d9c-b1e3-743dd010d664" (UID: "4911bcab-f90c-4d9c-b1e3-743dd010d664"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.741817 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4911bcab-f90c-4d9c-b1e3-743dd010d664-kube-api-access-phf7j" (OuterVolumeSpecName: "kube-api-access-phf7j") pod "4911bcab-f90c-4d9c-b1e3-743dd010d664" (UID: "4911bcab-f90c-4d9c-b1e3-743dd010d664"). InnerVolumeSpecName "kube-api-access-phf7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.760049 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-config-data" (OuterVolumeSpecName: "config-data") pod "4911bcab-f90c-4d9c-b1e3-743dd010d664" (UID: "4911bcab-f90c-4d9c-b1e3-743dd010d664"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.779382 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4911bcab-f90c-4d9c-b1e3-743dd010d664" (UID: "4911bcab-f90c-4d9c-b1e3-743dd010d664"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.837019 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phf7j\" (UniqueName: \"kubernetes.io/projected/4911bcab-f90c-4d9c-b1e3-743dd010d664-kube-api-access-phf7j\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.837092 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.837120 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:52 crc kubenswrapper[4775]: I1125 19:54:52.837146 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4911bcab-f90c-4d9c-b1e3-743dd010d664-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.109069 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4xj8s" event={"ID":"4911bcab-f90c-4d9c-b1e3-743dd010d664","Type":"ContainerDied","Data":"984912e795b6f21304a5b16060e2e47088352915506282706821a0dcbacf09f9"} Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.109128 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="984912e795b6f21304a5b16060e2e47088352915506282706821a0dcbacf09f9" Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.109155 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4xj8s" Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.327343 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.327742 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-log" containerID="cri-o://f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85" gracePeriod=30 Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.327939 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-api" containerID="cri-o://8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac" gracePeriod=30 Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.447362 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.447761 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a0a9d6a2-9240-4045-bafb-524ed57408bc" containerName="nova-scheduler-scheduler" containerID="cri-o://dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe" gracePeriod=30 Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.461375 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.461823 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-log" containerID="cri-o://24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6" gracePeriod=30 Nov 25 19:54:53 crc kubenswrapper[4775]: I1125 19:54:53.461918 4775 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-metadata" containerID="cri-o://e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f" gracePeriod=30 Nov 25 19:54:54 crc kubenswrapper[4775]: I1125 19:54:54.123096 4775 generic.go:334] "Generic (PLEG): container finished" podID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerID="f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85" exitCode=143 Nov 25 19:54:54 crc kubenswrapper[4775]: I1125 19:54:54.123490 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73628ae6-a3da-4ce2-aba2-a59adddb6011","Type":"ContainerDied","Data":"f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85"} Nov 25 19:54:54 crc kubenswrapper[4775]: I1125 19:54:54.131686 4775 generic.go:334] "Generic (PLEG): container finished" podID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerID="24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6" exitCode=143 Nov 25 19:54:54 crc kubenswrapper[4775]: I1125 19:54:54.131741 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"402fddde-9ee1-4d1f-906c-1ca8379f3fa2","Type":"ContainerDied","Data":"24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6"} Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.006067 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.120218 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-config-data\") pod \"a0a9d6a2-9240-4045-bafb-524ed57408bc\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.120384 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-combined-ca-bundle\") pod \"a0a9d6a2-9240-4045-bafb-524ed57408bc\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.120450 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdgll\" (UniqueName: \"kubernetes.io/projected/a0a9d6a2-9240-4045-bafb-524ed57408bc-kube-api-access-tdgll\") pod \"a0a9d6a2-9240-4045-bafb-524ed57408bc\" (UID: \"a0a9d6a2-9240-4045-bafb-524ed57408bc\") " Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.127601 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a9d6a2-9240-4045-bafb-524ed57408bc-kube-api-access-tdgll" (OuterVolumeSpecName: "kube-api-access-tdgll") pod "a0a9d6a2-9240-4045-bafb-524ed57408bc" (UID: "a0a9d6a2-9240-4045-bafb-524ed57408bc"). InnerVolumeSpecName "kube-api-access-tdgll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.150574 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-config-data" (OuterVolumeSpecName: "config-data") pod "a0a9d6a2-9240-4045-bafb-524ed57408bc" (UID: "a0a9d6a2-9240-4045-bafb-524ed57408bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.178982 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0a9d6a2-9240-4045-bafb-524ed57408bc" (UID: "a0a9d6a2-9240-4045-bafb-524ed57408bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.179790 4775 generic.go:334] "Generic (PLEG): container finished" podID="a0a9d6a2-9240-4045-bafb-524ed57408bc" containerID="dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe" exitCode=0 Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.179847 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0a9d6a2-9240-4045-bafb-524ed57408bc","Type":"ContainerDied","Data":"dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe"} Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.179897 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0a9d6a2-9240-4045-bafb-524ed57408bc","Type":"ContainerDied","Data":"84d6e9aae2fbd79679fcf4fcd08afea623c4053e4de3f07f1aec3265667ceff5"} Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.179927 4775 scope.go:117] "RemoveContainer" containerID="dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.180157 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.222604 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdgll\" (UniqueName: \"kubernetes.io/projected/a0a9d6a2-9240-4045-bafb-524ed57408bc-kube-api-access-tdgll\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.222997 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.223014 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a9d6a2-9240-4045-bafb-524ed57408bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.262676 4775 scope.go:117] "RemoveContainer" containerID="dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe" Nov 25 19:54:56 crc kubenswrapper[4775]: E1125 19:54:56.264791 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe\": container with ID starting with dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe not found: ID does not exist" containerID="dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.265132 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe"} err="failed to get container status \"dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe\": rpc error: code = NotFound desc = could not find container \"dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe\": container with ID 
starting with dc843060acfc7d8b539048b6937de4783099b49d7a27c88f2f1acc74a4dd3ebe not found: ID does not exist" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.305293 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.323318 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.333008 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:54:56 crc kubenswrapper[4775]: E1125 19:54:56.333364 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a9d6a2-9240-4045-bafb-524ed57408bc" containerName="nova-scheduler-scheduler" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.333382 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a9d6a2-9240-4045-bafb-524ed57408bc" containerName="nova-scheduler-scheduler" Nov 25 19:54:56 crc kubenswrapper[4775]: E1125 19:54:56.333416 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4911bcab-f90c-4d9c-b1e3-743dd010d664" containerName="nova-manage" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.333422 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4911bcab-f90c-4d9c-b1e3-743dd010d664" containerName="nova-manage" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.333584 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4911bcab-f90c-4d9c-b1e3-743dd010d664" containerName="nova-manage" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.333604 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a9d6a2-9240-4045-bafb-524ed57408bc" containerName="nova-scheduler-scheduler" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.334278 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.336865 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.342693 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.441000 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3d4749-58d7-41df-acc0-27538415babd-config-data\") pod \"nova-scheduler-0\" (UID: \"9d3d4749-58d7-41df-acc0-27538415babd\") " pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.441118 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3d4749-58d7-41df-acc0-27538415babd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d3d4749-58d7-41df-acc0-27538415babd\") " pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.441189 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46knh\" (UniqueName: \"kubernetes.io/projected/9d3d4749-58d7-41df-acc0-27538415babd-kube-api-access-46knh\") pod \"nova-scheduler-0\" (UID: \"9d3d4749-58d7-41df-acc0-27538415babd\") " pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.543063 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46knh\" (UniqueName: \"kubernetes.io/projected/9d3d4749-58d7-41df-acc0-27538415babd-kube-api-access-46knh\") pod \"nova-scheduler-0\" (UID: \"9d3d4749-58d7-41df-acc0-27538415babd\") " pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.543220 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3d4749-58d7-41df-acc0-27538415babd-config-data\") pod \"nova-scheduler-0\" (UID: \"9d3d4749-58d7-41df-acc0-27538415babd\") " pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.543259 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3d4749-58d7-41df-acc0-27538415babd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d3d4749-58d7-41df-acc0-27538415babd\") " pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.546540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3d4749-58d7-41df-acc0-27538415babd-config-data\") pod \"nova-scheduler-0\" (UID: \"9d3d4749-58d7-41df-acc0-27538415babd\") " pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.546857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3d4749-58d7-41df-acc0-27538415babd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d3d4749-58d7-41df-acc0-27538415babd\") " pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.563192 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46knh\" (UniqueName: \"kubernetes.io/projected/9d3d4749-58d7-41df-acc0-27538415babd-kube-api-access-46knh\") pod \"nova-scheduler-0\" (UID: \"9d3d4749-58d7-41df-acc0-27538415babd\") " pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.588190 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-metadata" probeResult="failure" 
output="Get \"https://10.217.0.173:8775/\": read tcp 10.217.0.2:39714->10.217.0.173:8775: read: connection reset by peer" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.588264 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.173:8775/\": read tcp 10.217.0.2:39722->10.217.0.173:8775: read: connection reset by peer" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.655862 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.871275 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a9d6a2-9240-4045-bafb-524ed57408bc" path="/var/lib/kubelet/pods/a0a9d6a2-9240-4045-bafb-524ed57408bc/volumes" Nov 25 19:54:56 crc kubenswrapper[4775]: I1125 19:54:56.962638 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.022961 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.052372 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-internal-tls-certs\") pod \"73628ae6-a3da-4ce2-aba2-a59adddb6011\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.052467 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-config-data\") pod \"73628ae6-a3da-4ce2-aba2-a59adddb6011\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.052490 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-nova-metadata-tls-certs\") pod \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.053036 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9vq\" (UniqueName: \"kubernetes.io/projected/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-kube-api-access-fw9vq\") pod \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.053074 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-combined-ca-bundle\") pod \"73628ae6-a3da-4ce2-aba2-a59adddb6011\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.053180 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-logs\") pod \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.053248 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-combined-ca-bundle\") pod \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.053332 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73628ae6-a3da-4ce2-aba2-a59adddb6011-logs\") pod \"73628ae6-a3da-4ce2-aba2-a59adddb6011\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.053359 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cjvs\" (UniqueName: \"kubernetes.io/projected/73628ae6-a3da-4ce2-aba2-a59adddb6011-kube-api-access-6cjvs\") pod \"73628ae6-a3da-4ce2-aba2-a59adddb6011\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.053411 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-config-data\") pod \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\" (UID: \"402fddde-9ee1-4d1f-906c-1ca8379f3fa2\") " Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.053432 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-public-tls-certs\") pod \"73628ae6-a3da-4ce2-aba2-a59adddb6011\" (UID: \"73628ae6-a3da-4ce2-aba2-a59adddb6011\") " Nov 25 19:54:57 crc kubenswrapper[4775]: 
I1125 19:54:57.056307 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73628ae6-a3da-4ce2-aba2-a59adddb6011-logs" (OuterVolumeSpecName: "logs") pod "73628ae6-a3da-4ce2-aba2-a59adddb6011" (UID: "73628ae6-a3da-4ce2-aba2-a59adddb6011"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.056544 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-logs" (OuterVolumeSpecName: "logs") pod "402fddde-9ee1-4d1f-906c-1ca8379f3fa2" (UID: "402fddde-9ee1-4d1f-906c-1ca8379f3fa2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.057767 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-kube-api-access-fw9vq" (OuterVolumeSpecName: "kube-api-access-fw9vq") pod "402fddde-9ee1-4d1f-906c-1ca8379f3fa2" (UID: "402fddde-9ee1-4d1f-906c-1ca8379f3fa2"). InnerVolumeSpecName "kube-api-access-fw9vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.060111 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73628ae6-a3da-4ce2-aba2-a59adddb6011-kube-api-access-6cjvs" (OuterVolumeSpecName: "kube-api-access-6cjvs") pod "73628ae6-a3da-4ce2-aba2-a59adddb6011" (UID: "73628ae6-a3da-4ce2-aba2-a59adddb6011"). InnerVolumeSpecName "kube-api-access-6cjvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.085869 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73628ae6-a3da-4ce2-aba2-a59adddb6011" (UID: "73628ae6-a3da-4ce2-aba2-a59adddb6011"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.091882 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-config-data" (OuterVolumeSpecName: "config-data") pod "73628ae6-a3da-4ce2-aba2-a59adddb6011" (UID: "73628ae6-a3da-4ce2-aba2-a59adddb6011"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.100134 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "402fddde-9ee1-4d1f-906c-1ca8379f3fa2" (UID: "402fddde-9ee1-4d1f-906c-1ca8379f3fa2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.108883 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "73628ae6-a3da-4ce2-aba2-a59adddb6011" (UID: "73628ae6-a3da-4ce2-aba2-a59adddb6011"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.113158 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-config-data" (OuterVolumeSpecName: "config-data") pod "402fddde-9ee1-4d1f-906c-1ca8379f3fa2" (UID: "402fddde-9ee1-4d1f-906c-1ca8379f3fa2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.116965 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "73628ae6-a3da-4ce2-aba2-a59adddb6011" (UID: "73628ae6-a3da-4ce2-aba2-a59adddb6011"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.122080 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "402fddde-9ee1-4d1f-906c-1ca8379f3fa2" (UID: "402fddde-9ee1-4d1f-906c-1ca8379f3fa2"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.157971 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73628ae6-a3da-4ce2-aba2-a59adddb6011-logs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158004 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cjvs\" (UniqueName: \"kubernetes.io/projected/73628ae6-a3da-4ce2-aba2-a59adddb6011-kube-api-access-6cjvs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158018 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158028 4775 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158037 4775 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158046 4775 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158053 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158061 4775 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9vq\" (UniqueName: \"kubernetes.io/projected/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-kube-api-access-fw9vq\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158068 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73628ae6-a3da-4ce2-aba2-a59adddb6011-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158077 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-logs\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.158086 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402fddde-9ee1-4d1f-906c-1ca8379f3fa2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.190363 4775 generic.go:334] "Generic (PLEG): container finished" podID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerID="e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f" exitCode=0 Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.190675 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"402fddde-9ee1-4d1f-906c-1ca8379f3fa2","Type":"ContainerDied","Data":"e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f"} Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.190705 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"402fddde-9ee1-4d1f-906c-1ca8379f3fa2","Type":"ContainerDied","Data":"57b6a36ec5e6c7d97cb6dea8d4a9ff6a6a6f0e8220712a03685a892f6de01d49"} Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.190720 4775 scope.go:117] "RemoveContainer" 
containerID="e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.190822 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.193978 4775 generic.go:334] "Generic (PLEG): container finished" podID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerID="8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac" exitCode=0 Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.194076 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73628ae6-a3da-4ce2-aba2-a59adddb6011","Type":"ContainerDied","Data":"8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac"} Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.194148 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73628ae6-a3da-4ce2-aba2-a59adddb6011","Type":"ContainerDied","Data":"ef45d30b32d6646732d909a24aaffdd6b7cc4b4a1dc177b80a99c50ac21ff2b6"} Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.194248 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.211943 4775 scope.go:117] "RemoveContainer" containerID="24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.213626 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 19:54:57 crc kubenswrapper[4775]: W1125 19:54:57.217839 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d3d4749_58d7_41df_acc0_27538415babd.slice/crio-66b603b4cb210ae0762b1f7068effe65dcaddd7015d2ef97a28d8fcaebc8ac31 WatchSource:0}: Error finding container 66b603b4cb210ae0762b1f7068effe65dcaddd7015d2ef97a28d8fcaebc8ac31: Status 404 returned error can't find the container with id 66b603b4cb210ae0762b1f7068effe65dcaddd7015d2ef97a28d8fcaebc8ac31 Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.230457 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.238981 4775 scope.go:117] "RemoveContainer" containerID="e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f" Nov 25 19:54:57 crc kubenswrapper[4775]: E1125 19:54:57.239558 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f\": container with ID starting with e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f not found: ID does not exist" containerID="e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.239597 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f"} err="failed to get container status 
\"e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f\": rpc error: code = NotFound desc = could not find container \"e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f\": container with ID starting with e68ca80ba7c1a4ef6ed3bba7c058b5c5daf5d0f4acfc6924b90e351cce7a510f not found: ID does not exist" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.239626 4775 scope.go:117] "RemoveContainer" containerID="24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6" Nov 25 19:54:57 crc kubenswrapper[4775]: E1125 19:54:57.240733 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6\": container with ID starting with 24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6 not found: ID does not exist" containerID="24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.240757 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6"} err="failed to get container status \"24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6\": rpc error: code = NotFound desc = could not find container \"24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6\": container with ID starting with 24b00989c27a9fa294d2e00c57d91b28ebeff3339181270ee41382eee7fb3ee6 not found: ID does not exist" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.240775 4775 scope.go:117] "RemoveContainer" containerID="8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.268349 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.277630 4775 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.289148 4775 scope.go:117] "RemoveContainer" containerID="f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.305239 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.322499 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:57 crc kubenswrapper[4775]: E1125 19:54:57.323268 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-metadata" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.323294 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-metadata" Nov 25 19:54:57 crc kubenswrapper[4775]: E1125 19:54:57.323311 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-api" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.323320 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-api" Nov 25 19:54:57 crc kubenswrapper[4775]: E1125 19:54:57.323337 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-log" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.323345 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-log" Nov 25 19:54:57 crc kubenswrapper[4775]: E1125 19:54:57.323369 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-log" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.323376 4775 
state_mem.go:107] "Deleted CPUSet assignment" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-log" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.323743 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-api" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.323766 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-metadata" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.323802 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" containerName="nova-metadata-log" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.323816 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" containerName="nova-api-log" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.325504 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.329279 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.330534 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.330714 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.330836 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.332699 4775 scope.go:117] "RemoveContainer" containerID="8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac" Nov 25 19:54:57 crc kubenswrapper[4775]: E1125 19:54:57.333206 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac\": container with ID starting with 8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac not found: ID does not exist" containerID="8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.333243 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac"} err="failed to get container status \"8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac\": rpc error: code = NotFound desc = could not find container \"8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac\": container with ID starting with 8cf180f1589b466531c2692b826172adc2ebe86356acb359de720683f4443cac not found: ID does not exist" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.333263 4775 
scope.go:117] "RemoveContainer" containerID="f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85" Nov 25 19:54:57 crc kubenswrapper[4775]: E1125 19:54:57.333435 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85\": container with ID starting with f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85 not found: ID does not exist" containerID="f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.333451 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85"} err="failed to get container status \"f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85\": rpc error: code = NotFound desc = could not find container \"f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85\": container with ID starting with f608ad04ce4597c75f1d65eae64109317c28b2088551f8b307788a058006ed85 not found: ID does not exist" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.337026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.341324 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.343804 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.343979 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.345871 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.361706 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.361758 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.361810 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.361831 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-config-data\") pod \"nova-api-0\" (UID: 
\"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.361862 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t6zw\" (UniqueName: \"kubernetes.io/projected/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-kube-api-access-6t6zw\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.361896 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.361917 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96qg\" (UniqueName: \"kubernetes.io/projected/362aa62d-5f99-4f83-9996-e564df5182e1-kube-api-access-h96qg\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.361968 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-logs\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.361990 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-config-data\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc 
kubenswrapper[4775]: I1125 19:54:57.362185 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.362254 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/362aa62d-5f99-4f83-9996-e564df5182e1-logs\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463604 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-logs\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463660 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-config-data\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463702 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463731 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/362aa62d-5f99-4f83-9996-e564df5182e1-logs\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463766 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463790 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463822 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463841 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-config-data\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t6zw\" (UniqueName: \"kubernetes.io/projected/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-kube-api-access-6t6zw\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc 
kubenswrapper[4775]: I1125 19:54:57.463894 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.463913 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h96qg\" (UniqueName: \"kubernetes.io/projected/362aa62d-5f99-4f83-9996-e564df5182e1-kube-api-access-h96qg\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.466228 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/362aa62d-5f99-4f83-9996-e564df5182e1-logs\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.466953 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-logs\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.467489 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.468062 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.468473 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.468709 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.470031 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362aa62d-5f99-4f83-9996-e564df5182e1-config-data\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.470331 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.468351 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-config-data\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.480327 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t6zw\" 
(UniqueName: \"kubernetes.io/projected/678c6cb5-0c57-4a2f-8312-01c3230d1ff8-kube-api-access-6t6zw\") pod \"nova-metadata-0\" (UID: \"678c6cb5-0c57-4a2f-8312-01c3230d1ff8\") " pod="openstack/nova-metadata-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.482068 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h96qg\" (UniqueName: \"kubernetes.io/projected/362aa62d-5f99-4f83-9996-e564df5182e1-kube-api-access-h96qg\") pod \"nova-api-0\" (UID: \"362aa62d-5f99-4f83-9996-e564df5182e1\") " pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.647069 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 19:54:57 crc kubenswrapper[4775]: I1125 19:54:57.675537 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 19:54:58 crc kubenswrapper[4775]: I1125 19:54:58.121277 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 19:54:58 crc kubenswrapper[4775]: W1125 19:54:58.146319 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod678c6cb5_0c57_4a2f_8312_01c3230d1ff8.slice/crio-510d8443ae98fe307834de01ec78576bdc48fb9faea9d5119f325dfdb2b78dcc WatchSource:0}: Error finding container 510d8443ae98fe307834de01ec78576bdc48fb9faea9d5119f325dfdb2b78dcc: Status 404 returned error can't find the container with id 510d8443ae98fe307834de01ec78576bdc48fb9faea9d5119f325dfdb2b78dcc Nov 25 19:54:58 crc kubenswrapper[4775]: I1125 19:54:58.204719 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d3d4749-58d7-41df-acc0-27538415babd","Type":"ContainerStarted","Data":"262fa324d107b66c035f06b4df82a008f716f85dbcea7d26c6829715ac8d0774"} Nov 25 19:54:58 crc kubenswrapper[4775]: I1125 19:54:58.204794 4775 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d3d4749-58d7-41df-acc0-27538415babd","Type":"ContainerStarted","Data":"66b603b4cb210ae0762b1f7068effe65dcaddd7015d2ef97a28d8fcaebc8ac31"} Nov 25 19:54:58 crc kubenswrapper[4775]: I1125 19:54:58.209544 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"678c6cb5-0c57-4a2f-8312-01c3230d1ff8","Type":"ContainerStarted","Data":"510d8443ae98fe307834de01ec78576bdc48fb9faea9d5119f325dfdb2b78dcc"} Nov 25 19:54:58 crc kubenswrapper[4775]: I1125 19:54:58.227800 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.227782389 podStartE2EDuration="2.227782389s" podCreationTimestamp="2025-11-25 19:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:58.217776579 +0000 UTC m=+1280.134138945" watchObservedRunningTime="2025-11-25 19:54:58.227782389 +0000 UTC m=+1280.144144755" Nov 25 19:54:58 crc kubenswrapper[4775]: I1125 19:54:58.269279 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 19:54:58 crc kubenswrapper[4775]: W1125 19:54:58.278618 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod362aa62d_5f99_4f83_9996_e564df5182e1.slice/crio-c3ba99e2e3c12b9a36f65ac407130194f85bd86ec4792172d7b35ad31ea7c908 WatchSource:0}: Error finding container c3ba99e2e3c12b9a36f65ac407130194f85bd86ec4792172d7b35ad31ea7c908: Status 404 returned error can't find the container with id c3ba99e2e3c12b9a36f65ac407130194f85bd86ec4792172d7b35ad31ea7c908 Nov 25 19:54:58 crc kubenswrapper[4775]: I1125 19:54:58.859195 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402fddde-9ee1-4d1f-906c-1ca8379f3fa2" 
path="/var/lib/kubelet/pods/402fddde-9ee1-4d1f-906c-1ca8379f3fa2/volumes" Nov 25 19:54:58 crc kubenswrapper[4775]: I1125 19:54:58.860326 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73628ae6-a3da-4ce2-aba2-a59adddb6011" path="/var/lib/kubelet/pods/73628ae6-a3da-4ce2-aba2-a59adddb6011/volumes" Nov 25 19:54:59 crc kubenswrapper[4775]: I1125 19:54:59.222798 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"678c6cb5-0c57-4a2f-8312-01c3230d1ff8","Type":"ContainerStarted","Data":"1af93639ced3bbc8adac2164e5255349a3fc2e409f51148ce78e37b07944f184"} Nov 25 19:54:59 crc kubenswrapper[4775]: I1125 19:54:59.222890 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"678c6cb5-0c57-4a2f-8312-01c3230d1ff8","Type":"ContainerStarted","Data":"f4d5d5eecf755f3a859807ef964da6132a475fff01e57a24676246fe5b2ac67b"} Nov 25 19:54:59 crc kubenswrapper[4775]: I1125 19:54:59.228957 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"362aa62d-5f99-4f83-9996-e564df5182e1","Type":"ContainerStarted","Data":"97e9d049870ff336abb8a093d3a748674ebf9bc4b03140f3f08434508df334c7"} Nov 25 19:54:59 crc kubenswrapper[4775]: I1125 19:54:59.229006 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"362aa62d-5f99-4f83-9996-e564df5182e1","Type":"ContainerStarted","Data":"e7e973bd797d85eb8f7c265119b96aede47cd5ae87a9dc175d3c8c830e6535e7"} Nov 25 19:54:59 crc kubenswrapper[4775]: I1125 19:54:59.229021 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"362aa62d-5f99-4f83-9996-e564df5182e1","Type":"ContainerStarted","Data":"c3ba99e2e3c12b9a36f65ac407130194f85bd86ec4792172d7b35ad31ea7c908"} Nov 25 19:54:59 crc kubenswrapper[4775]: I1125 19:54:59.258387 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=2.258360107 podStartE2EDuration="2.258360107s" podCreationTimestamp="2025-11-25 19:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:59.253940838 +0000 UTC m=+1281.170303244" watchObservedRunningTime="2025-11-25 19:54:59.258360107 +0000 UTC m=+1281.174722513" Nov 25 19:54:59 crc kubenswrapper[4775]: I1125 19:54:59.287359 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.28733901 podStartE2EDuration="2.28733901s" podCreationTimestamp="2025-11-25 19:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:54:59.281954714 +0000 UTC m=+1281.198317090" watchObservedRunningTime="2025-11-25 19:54:59.28733901 +0000 UTC m=+1281.203701386" Nov 25 19:55:01 crc kubenswrapper[4775]: I1125 19:55:01.657334 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 19:55:02 crc kubenswrapper[4775]: I1125 19:55:02.676869 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 19:55:02 crc kubenswrapper[4775]: I1125 19:55:02.677547 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 19:55:06 crc kubenswrapper[4775]: I1125 19:55:06.658576 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 19:55:06 crc kubenswrapper[4775]: I1125 19:55:06.706589 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 19:55:07 crc kubenswrapper[4775]: I1125 19:55:07.363252 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 19:55:07 crc kubenswrapper[4775]: I1125 
19:55:07.647963 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 19:55:07 crc kubenswrapper[4775]: I1125 19:55:07.648194 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 19:55:07 crc kubenswrapper[4775]: I1125 19:55:07.676881 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 19:55:07 crc kubenswrapper[4775]: I1125 19:55:07.676936 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 19:55:08 crc kubenswrapper[4775]: I1125 19:55:08.345472 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 19:55:08 crc kubenswrapper[4775]: I1125 19:55:08.658867 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="362aa62d-5f99-4f83-9996-e564df5182e1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 19:55:08 crc kubenswrapper[4775]: I1125 19:55:08.658883 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="362aa62d-5f99-4f83-9996-e564df5182e1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 19:55:08 crc kubenswrapper[4775]: I1125 19:55:08.690862 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="678c6cb5-0c57-4a2f-8312-01c3230d1ff8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 19:55:08 crc kubenswrapper[4775]: I1125 19:55:08.691080 4775 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="678c6cb5-0c57-4a2f-8312-01c3230d1ff8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 19:55:17 crc kubenswrapper[4775]: I1125 19:55:17.654006 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 19:55:17 crc kubenswrapper[4775]: I1125 19:55:17.655488 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 19:55:17 crc kubenswrapper[4775]: I1125 19:55:17.656945 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 19:55:17 crc kubenswrapper[4775]: I1125 19:55:17.687235 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 19:55:17 crc kubenswrapper[4775]: I1125 19:55:17.689090 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 19:55:17 crc kubenswrapper[4775]: I1125 19:55:17.689378 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 19:55:17 crc kubenswrapper[4775]: I1125 19:55:17.698469 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 19:55:17 crc kubenswrapper[4775]: I1125 19:55:17.703825 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 19:55:18 crc kubenswrapper[4775]: I1125 19:55:18.446219 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 19:55:18 crc kubenswrapper[4775]: I1125 19:55:18.455811 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 19:55:26 crc kubenswrapper[4775]: I1125 19:55:26.275750 4775 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 19:55:27 crc kubenswrapper[4775]: I1125 19:55:27.345206 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 19:55:30 crc kubenswrapper[4775]: I1125 19:55:30.844667 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="50995ab5-ef22-4466-9906-fab208c9a82d" containerName="rabbitmq" containerID="cri-o://f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4" gracePeriod=604796 Nov 25 19:55:31 crc kubenswrapper[4775]: I1125 19:55:31.707970 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" containerName="rabbitmq" containerID="cri-o://8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45" gracePeriod=604796 Nov 25 19:55:35 crc kubenswrapper[4775]: I1125 19:55:35.465072 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="50995ab5-ef22-4466-9906-fab208c9a82d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Nov 25 19:55:35 crc kubenswrapper[4775]: I1125 19:55:35.803972 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.477709 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.579345 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-plugins\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.579408 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-plugins-conf\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.579438 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-tls\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.579482 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-server-conf\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.580181 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.580591 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-config-data\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.580629 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-erlang-cookie\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.580659 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-confd\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.580757 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50995ab5-ef22-4466-9906-fab208c9a82d-pod-info\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.580791 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50995ab5-ef22-4466-9906-fab208c9a82d-erlang-cookie-secret\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.580825 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.580862 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv7c6\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-kube-api-access-lv7c6\") pod \"50995ab5-ef22-4466-9906-fab208c9a82d\" (UID: \"50995ab5-ef22-4466-9906-fab208c9a82d\") " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.581157 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.580666 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.582044 4775 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.582071 4775 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.582090 4775 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.590600 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.590783 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.591069 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/50995ab5-ef22-4466-9906-fab208c9a82d-pod-info" (OuterVolumeSpecName: "pod-info") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.595173 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50995ab5-ef22-4466-9906-fab208c9a82d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.601290 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-kube-api-access-lv7c6" (OuterVolumeSpecName: "kube-api-access-lv7c6") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "kube-api-access-lv7c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.628118 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-config-data" (OuterVolumeSpecName: "config-data") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.655190 4775 generic.go:334] "Generic (PLEG): container finished" podID="50995ab5-ef22-4466-9906-fab208c9a82d" containerID="f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4" exitCode=0 Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.655246 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50995ab5-ef22-4466-9906-fab208c9a82d","Type":"ContainerDied","Data":"f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4"} Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.655281 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50995ab5-ef22-4466-9906-fab208c9a82d","Type":"ContainerDied","Data":"91fb4c07177e013ec10ef171ef6f315ecb1509843dcc02aab6e4165cd413f88b"} Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.655307 4775 scope.go:117] "RemoveContainer" containerID="f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.655483 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.665977 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-server-conf" (OuterVolumeSpecName: "server-conf") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.687083 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.687123 4775 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50995ab5-ef22-4466-9906-fab208c9a82d-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.687137 4775 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50995ab5-ef22-4466-9906-fab208c9a82d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.687164 4775 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.687177 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv7c6\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-kube-api-access-lv7c6\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.687188 4775 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.687197 4775 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50995ab5-ef22-4466-9906-fab208c9a82d-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.728814 4775 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.737141 4775 scope.go:117] "RemoveContainer" containerID="eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.746196 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "50995ab5-ef22-4466-9906-fab208c9a82d" (UID: "50995ab5-ef22-4466-9906-fab208c9a82d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.755277 4775 scope.go:117] "RemoveContainer" containerID="f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4" Nov 25 19:55:37 crc kubenswrapper[4775]: E1125 19:55:37.755631 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4\": container with ID starting with f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4 not found: ID does not exist" containerID="f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.755748 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4"} err="failed to get container status \"f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4\": rpc error: code = NotFound desc = could not find container \"f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4\": container with ID starting with f9fffdec49e2431767bdf0527f58acdbdead4c1dd5782a9e9de7f9ec74f041e4 not 
found: ID does not exist" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.755779 4775 scope.go:117] "RemoveContainer" containerID="eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0" Nov 25 19:55:37 crc kubenswrapper[4775]: E1125 19:55:37.758063 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0\": container with ID starting with eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0 not found: ID does not exist" containerID="eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.758123 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0"} err="failed to get container status \"eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0\": rpc error: code = NotFound desc = could not find container \"eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0\": container with ID starting with eca5654027a0fbf8762eb645c694f281694d16185581ba50a6c0f812a8d51bf0 not found: ID does not exist" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.789571 4775 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50995ab5-ef22-4466-9906-fab208c9a82d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.789603 4775 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:37 crc kubenswrapper[4775]: I1125 19:55:37.995224 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.008905 4775 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.021435 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 19:55:38 crc kubenswrapper[4775]: E1125 19:55:38.021912 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50995ab5-ef22-4466-9906-fab208c9a82d" containerName="setup-container" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.021934 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="50995ab5-ef22-4466-9906-fab208c9a82d" containerName="setup-container" Nov 25 19:55:38 crc kubenswrapper[4775]: E1125 19:55:38.021969 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50995ab5-ef22-4466-9906-fab208c9a82d" containerName="rabbitmq" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.021979 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="50995ab5-ef22-4466-9906-fab208c9a82d" containerName="rabbitmq" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.022240 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="50995ab5-ef22-4466-9906-fab208c9a82d" containerName="rabbitmq" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.031141 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.033477 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.033520 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.033534 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.033517 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.035054 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6vrgj" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.035244 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.035435 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.043463 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.196653 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw52s\" (UniqueName: \"kubernetes.io/projected/727ee9a0-e30b-4915-a405-c68a73d8a6e2-kube-api-access-jw52s\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.196709 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/727ee9a0-e30b-4915-a405-c68a73d8a6e2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.196731 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.196766 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/727ee9a0-e30b-4915-a405-c68a73d8a6e2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.196794 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.196811 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.197102 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/727ee9a0-e30b-4915-a405-c68a73d8a6e2-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.197160 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.197201 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/727ee9a0-e30b-4915-a405-c68a73d8a6e2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.197237 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.197297 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/727ee9a0-e30b-4915-a405-c68a73d8a6e2-config-data\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.287567 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.301078 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/727ee9a0-e30b-4915-a405-c68a73d8a6e2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.301158 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/727ee9a0-e30b-4915-a405-c68a73d8a6e2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.301222 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.301256 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/727ee9a0-e30b-4915-a405-c68a73d8a6e2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.301721 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.301661 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.301827 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/727ee9a0-e30b-4915-a405-c68a73d8a6e2-config-data\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.301920 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw52s\" (UniqueName: \"kubernetes.io/projected/727ee9a0-e30b-4915-a405-c68a73d8a6e2-kube-api-access-jw52s\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.301954 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/727ee9a0-e30b-4915-a405-c68a73d8a6e2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.302984 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/727ee9a0-e30b-4915-a405-c68a73d8a6e2-config-data\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.303042 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.303483 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/727ee9a0-e30b-4915-a405-c68a73d8a6e2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.303546 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.303570 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.303691 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/727ee9a0-e30b-4915-a405-c68a73d8a6e2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.303869 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.304067 
4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.312038 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.314405 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/727ee9a0-e30b-4915-a405-c68a73d8a6e2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.319231 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/727ee9a0-e30b-4915-a405-c68a73d8a6e2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.323292 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/727ee9a0-e30b-4915-a405-c68a73d8a6e2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.332594 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw52s\" (UniqueName: \"kubernetes.io/projected/727ee9a0-e30b-4915-a405-c68a73d8a6e2-kube-api-access-jw52s\") pod 
\"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.386851 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"727ee9a0-e30b-4915-a405-c68a73d8a6e2\") " pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407179 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-plugins-conf\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407497 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-config-data\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407515 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407543 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-plugins\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407668 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-erlang-cookie\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407721 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-server-conf\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407743 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-pod-info\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407782 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-erlang-cookie-secret\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407812 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-tls\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.407856 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-confd\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: 
I1125 19:55:38.407883 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvg2r\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-kube-api-access-wvg2r\") pod \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\" (UID: \"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa\") " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.410167 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.410514 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.411727 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.418930 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.437924 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.439587 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-kube-api-access-wvg2r" (OuterVolumeSpecName: "kube-api-access-wvg2r") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "kube-api-access-wvg2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.443371 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-pod-info" (OuterVolumeSpecName: "pod-info") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.448998 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.480762 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-config-data" (OuterVolumeSpecName: "config-data") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.498962 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-server-conf" (OuterVolumeSpecName: "server-conf") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.512597 4775 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.512630 4775 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.512639 4775 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.512652 4775 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.512662 4775 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.512669 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvg2r\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-kube-api-access-wvg2r\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.512689 4775 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 
19:55:38.512698 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.512723 4775 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.512733 4775 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.531939 4775 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.568463 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" (UID: "58ec8b76-e7fa-4a42-81b5-bdb3d23117fa"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.614434 4775 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.614476 4775 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.664589 4775 generic.go:334] "Generic (PLEG): container finished" podID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" containerID="8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45" exitCode=0 Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.664623 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa","Type":"ContainerDied","Data":"8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45"} Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.664647 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58ec8b76-e7fa-4a42-81b5-bdb3d23117fa","Type":"ContainerDied","Data":"9a69ea773f120d52e00ec117f79b7c94ab792f1d58edc08640f9840bd71b7dbe"} Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.664677 4775 scope.go:117] "RemoveContainer" containerID="8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.664796 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.666808 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.695035 4775 scope.go:117] "RemoveContainer" containerID="70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.722025 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.732483 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.740933 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 19:55:38 crc kubenswrapper[4775]: E1125 19:55:38.741340 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" containerName="rabbitmq" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.741356 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" containerName="rabbitmq" Nov 25 19:55:38 crc kubenswrapper[4775]: E1125 19:55:38.741368 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" containerName="setup-container" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.741374 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" containerName="setup-container" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.741576 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" containerName="rabbitmq" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.742718 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.750563 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.750583 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.750635 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.750705 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.751434 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.751704 4775 scope.go:117] "RemoveContainer" containerID="8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45" Nov 25 19:55:38 crc kubenswrapper[4775]: E1125 19:55:38.752097 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45\": container with ID starting with 8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45 not found: ID does not exist" containerID="8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.752127 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45"} err="failed to get container status \"8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45\": rpc error: code = NotFound desc = could not find container 
\"8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45\": container with ID starting with 8b428b0b1f29761fa52693fc6473a2f550a49875260b184511dffc6f48656f45 not found: ID does not exist" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.752150 4775 scope.go:117] "RemoveContainer" containerID="70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec" Nov 25 19:55:38 crc kubenswrapper[4775]: E1125 19:55:38.752502 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec\": container with ID starting with 70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec not found: ID does not exist" containerID="70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.752518 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec"} err="failed to get container status \"70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec\": rpc error: code = NotFound desc = could not find container \"70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec\": container with ID starting with 70ab4156a953c7a981921faac9a56dd17bc02ce461f66a3d558887ce12c31fec not found: ID does not exist" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.756042 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.756483 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qsmb9" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.756598 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 19:55:38 crc 
kubenswrapper[4775]: I1125 19:55:38.858580 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50995ab5-ef22-4466-9906-fab208c9a82d" path="/var/lib/kubelet/pods/50995ab5-ef22-4466-9906-fab208c9a82d/volumes" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.859736 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ec8b76-e7fa-4a42-81b5-bdb3d23117fa" path="/var/lib/kubelet/pods/58ec8b76-e7fa-4a42-81b5-bdb3d23117fa/volumes" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.918943 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5nt\" (UniqueName: \"kubernetes.io/projected/749c0c26-acc5-490a-9723-b45a341360bf-kube-api-access-8g5nt\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.918983 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/749c0c26-acc5-490a-9723-b45a341360bf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.919013 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/749c0c26-acc5-490a-9723-b45a341360bf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.919034 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.919192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/749c0c26-acc5-490a-9723-b45a341360bf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.919304 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.919344 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/749c0c26-acc5-490a-9723-b45a341360bf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.919389 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.919415 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.919454 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:38 crc kubenswrapper[4775]: I1125 19:55:38.919478 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/749c0c26-acc5-490a-9723-b45a341360bf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021178 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5nt\" (UniqueName: \"kubernetes.io/projected/749c0c26-acc5-490a-9723-b45a341360bf-kube-api-access-8g5nt\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021228 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/749c0c26-acc5-490a-9723-b45a341360bf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021262 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/749c0c26-acc5-490a-9723-b45a341360bf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021289 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021327 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/749c0c26-acc5-490a-9723-b45a341360bf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021366 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021385 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/749c0c26-acc5-490a-9723-b45a341360bf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021416 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021433 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021457 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/749c0c26-acc5-490a-9723-b45a341360bf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.021473 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.022277 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.022375 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.022989 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.023632 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/749c0c26-acc5-490a-9723-b45a341360bf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.023917 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/749c0c26-acc5-490a-9723-b45a341360bf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.026003 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/749c0c26-acc5-490a-9723-b45a341360bf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.028082 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/749c0c26-acc5-490a-9723-b45a341360bf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.037249 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") 
" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.037771 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/749c0c26-acc5-490a-9723-b45a341360bf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.042130 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/749c0c26-acc5-490a-9723-b45a341360bf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.044687 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5nt\" (UniqueName: \"kubernetes.io/projected/749c0c26-acc5-490a-9723-b45a341360bf-kube-api-access-8g5nt\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.050700 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"749c0c26-acc5-490a-9723-b45a341360bf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.081490 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.149591 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.557347 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 19:55:39 crc kubenswrapper[4775]: W1125 19:55:39.557634 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod749c0c26_acc5_490a_9723_b45a341360bf.slice/crio-0ba45acf7ca3a9c9bbca95197e5d256d4c48c51a9ff8b41cd200f8acc6ab57db WatchSource:0}: Error finding container 0ba45acf7ca3a9c9bbca95197e5d256d4c48c51a9ff8b41cd200f8acc6ab57db: Status 404 returned error can't find the container with id 0ba45acf7ca3a9c9bbca95197e5d256d4c48c51a9ff8b41cd200f8acc6ab57db Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.675024 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"749c0c26-acc5-490a-9723-b45a341360bf","Type":"ContainerStarted","Data":"0ba45acf7ca3a9c9bbca95197e5d256d4c48c51a9ff8b41cd200f8acc6ab57db"} Nov 25 19:55:39 crc kubenswrapper[4775]: I1125 19:55:39.676152 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"727ee9a0-e30b-4915-a405-c68a73d8a6e2","Type":"ContainerStarted","Data":"4cf4204b93bac60e7520a08cfeeda521089d10ded26d8678d580b1723f37c136"} Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.071005 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.071885 4775 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.562240 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4rmmt"] Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.564006 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.566931 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.577030 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4rmmt"] Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.686737 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.686818 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-config\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.686922 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8x2\" (UniqueName: 
\"kubernetes.io/projected/11ce49d9-f19a-47d5-afd9-7e89820d3400-kube-api-access-qf8x2\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.686943 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.686959 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.686994 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.696178 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"727ee9a0-e30b-4915-a405-c68a73d8a6e2","Type":"ContainerStarted","Data":"4d87a04176a81c27018ff333e7052f839f8eb9a99113c1232ea3def0b0f41d1b"} Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.698103 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"749c0c26-acc5-490a-9723-b45a341360bf","Type":"ContainerStarted","Data":"3722098f52a9b53cf0d09751f506e367a47a5668ee0ecb9e6d348cf62f0c712e"} Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.789082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf8x2\" (UniqueName: \"kubernetes.io/projected/11ce49d9-f19a-47d5-afd9-7e89820d3400-kube-api-access-qf8x2\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.789424 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.789588 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.789805 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.789995 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-dns-svc\") pod 
\"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.790194 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-config\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.790409 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.790509 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.790625 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.790855 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " 
pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.790897 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-config\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.820424 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf8x2\" (UniqueName: \"kubernetes.io/projected/11ce49d9-f19a-47d5-afd9-7e89820d3400-kube-api-access-qf8x2\") pod \"dnsmasq-dns-6447ccbd8f-4rmmt\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:41 crc kubenswrapper[4775]: I1125 19:55:41.889479 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:42 crc kubenswrapper[4775]: I1125 19:55:42.401506 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4rmmt"] Nov 25 19:55:42 crc kubenswrapper[4775]: W1125 19:55:42.403467 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11ce49d9_f19a_47d5_afd9_7e89820d3400.slice/crio-d7dede0c8e8d37f073239738468dc6bf4807b2d6b0088afb71cbbfd077852f34 WatchSource:0}: Error finding container d7dede0c8e8d37f073239738468dc6bf4807b2d6b0088afb71cbbfd077852f34: Status 404 returned error can't find the container with id d7dede0c8e8d37f073239738468dc6bf4807b2d6b0088afb71cbbfd077852f34 Nov 25 19:55:42 crc kubenswrapper[4775]: I1125 19:55:42.708036 4775 generic.go:334] "Generic (PLEG): container finished" podID="11ce49d9-f19a-47d5-afd9-7e89820d3400" containerID="21c930ae616422b9cbcd0ab4a25e83773c9f1c19a58327264e1368a676a26f6d" exitCode=0 Nov 25 19:55:42 crc kubenswrapper[4775]: I1125 
19:55:42.708080 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" event={"ID":"11ce49d9-f19a-47d5-afd9-7e89820d3400","Type":"ContainerDied","Data":"21c930ae616422b9cbcd0ab4a25e83773c9f1c19a58327264e1368a676a26f6d"} Nov 25 19:55:42 crc kubenswrapper[4775]: I1125 19:55:42.708420 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" event={"ID":"11ce49d9-f19a-47d5-afd9-7e89820d3400","Type":"ContainerStarted","Data":"d7dede0c8e8d37f073239738468dc6bf4807b2d6b0088afb71cbbfd077852f34"} Nov 25 19:55:43 crc kubenswrapper[4775]: I1125 19:55:43.720121 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" event={"ID":"11ce49d9-f19a-47d5-afd9-7e89820d3400","Type":"ContainerStarted","Data":"4ecec18135c83a106e0314b3e92a58ba0f0d03db7c0324525a79004451eb1dc9"} Nov 25 19:55:43 crc kubenswrapper[4775]: I1125 19:55:43.720596 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:43 crc kubenswrapper[4775]: I1125 19:55:43.742727 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" podStartSLOduration=2.742706305 podStartE2EDuration="2.742706305s" podCreationTimestamp="2025-11-25 19:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:55:43.740591257 +0000 UTC m=+1325.656953663" watchObservedRunningTime="2025-11-25 19:55:43.742706305 +0000 UTC m=+1325.659068671" Nov 25 19:55:51 crc kubenswrapper[4775]: I1125 19:55:51.891906 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:55:51 crc kubenswrapper[4775]: I1125 19:55:51.968952 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tnbk6"] Nov 25 
19:55:51 crc kubenswrapper[4775]: I1125 19:55:51.969463 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" podUID="e38a12cb-af03-43cf-97fb-05f2e5364c82" containerName="dnsmasq-dns" containerID="cri-o://b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca" gracePeriod=10 Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.169034 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-66dcr"] Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.171183 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.197547 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-66dcr"] Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.325155 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9fc\" (UniqueName: \"kubernetes.io/projected/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-kube-api-access-gb9fc\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.325391 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.325532 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: 
\"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.325846 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.325908 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.325935 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-config\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.428147 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.428310 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: 
\"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.428361 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.428393 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-config\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.428508 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb9fc\" (UniqueName: \"kubernetes.io/projected/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-kube-api-access-gb9fc\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.428589 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.429051 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " 
pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.429604 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-config\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.432135 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.432839 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.433201 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.443412 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.461338 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb9fc\" (UniqueName: \"kubernetes.io/projected/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-kube-api-access-gb9fc\") pod \"dnsmasq-dns-864d5fc68c-66dcr\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.498117 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.529462 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-sb\") pod \"e38a12cb-af03-43cf-97fb-05f2e5364c82\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.529702 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-config\") pod \"e38a12cb-af03-43cf-97fb-05f2e5364c82\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.529804 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45ldb\" (UniqueName: \"kubernetes.io/projected/e38a12cb-af03-43cf-97fb-05f2e5364c82-kube-api-access-45ldb\") pod \"e38a12cb-af03-43cf-97fb-05f2e5364c82\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.529867 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-nb\") pod 
\"e38a12cb-af03-43cf-97fb-05f2e5364c82\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.529986 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-dns-svc\") pod \"e38a12cb-af03-43cf-97fb-05f2e5364c82\" (UID: \"e38a12cb-af03-43cf-97fb-05f2e5364c82\") " Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.534004 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38a12cb-af03-43cf-97fb-05f2e5364c82-kube-api-access-45ldb" (OuterVolumeSpecName: "kube-api-access-45ldb") pod "e38a12cb-af03-43cf-97fb-05f2e5364c82" (UID: "e38a12cb-af03-43cf-97fb-05f2e5364c82"). InnerVolumeSpecName "kube-api-access-45ldb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.568963 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e38a12cb-af03-43cf-97fb-05f2e5364c82" (UID: "e38a12cb-af03-43cf-97fb-05f2e5364c82"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.574886 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-config" (OuterVolumeSpecName: "config") pod "e38a12cb-af03-43cf-97fb-05f2e5364c82" (UID: "e38a12cb-af03-43cf-97fb-05f2e5364c82"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.583328 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e38a12cb-af03-43cf-97fb-05f2e5364c82" (UID: "e38a12cb-af03-43cf-97fb-05f2e5364c82"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.603115 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e38a12cb-af03-43cf-97fb-05f2e5364c82" (UID: "e38a12cb-af03-43cf-97fb-05f2e5364c82"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.633816 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.633850 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.633864 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45ldb\" (UniqueName: \"kubernetes.io/projected/e38a12cb-af03-43cf-97fb-05f2e5364c82-kube-api-access-45ldb\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.633879 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-ovsdbserver-nb\") on node \"crc\" 
DevicePath \"\"" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.633893 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e38a12cb-af03-43cf-97fb-05f2e5364c82-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.769092 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-66dcr"] Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.811799 4775 generic.go:334] "Generic (PLEG): container finished" podID="e38a12cb-af03-43cf-97fb-05f2e5364c82" containerID="b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca" exitCode=0 Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.811895 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.811889 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" event={"ID":"e38a12cb-af03-43cf-97fb-05f2e5364c82","Type":"ContainerDied","Data":"b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca"} Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.812028 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tnbk6" event={"ID":"e38a12cb-af03-43cf-97fb-05f2e5364c82","Type":"ContainerDied","Data":"91eb258b60e43c785990e7504f52e189239a6d14a2bfd09d8c1491233cf1f575"} Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.812056 4775 scope.go:117] "RemoveContainer" containerID="b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.814988 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" event={"ID":"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43","Type":"ContainerStarted","Data":"afae7a127032a10081ca801b6dacb4c0ccf6dae5b059259eeacb5c167bb196ab"} 
Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.862164 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tnbk6"] Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.866233 4775 scope.go:117] "RemoveContainer" containerID="9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.867347 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tnbk6"] Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.894180 4775 scope.go:117] "RemoveContainer" containerID="b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca" Nov 25 19:55:52 crc kubenswrapper[4775]: E1125 19:55:52.894587 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca\": container with ID starting with b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca not found: ID does not exist" containerID="b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.894658 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca"} err="failed to get container status \"b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca\": rpc error: code = NotFound desc = could not find container \"b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca\": container with ID starting with b057c412fb8f961c145d29bd42c78c4094acbb64e66201ddadd6b8ede3da5cca not found: ID does not exist" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.894685 4775 scope.go:117] "RemoveContainer" containerID="9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884" Nov 25 19:55:52 crc kubenswrapper[4775]: E1125 19:55:52.895089 4775 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884\": container with ID starting with 9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884 not found: ID does not exist" containerID="9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884" Nov 25 19:55:52 crc kubenswrapper[4775]: I1125 19:55:52.895128 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884"} err="failed to get container status \"9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884\": rpc error: code = NotFound desc = could not find container \"9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884\": container with ID starting with 9b4d55c4d54ec3a4d786e4fcc4765367d17a0ec3147e880d7ffa78cbd5d1c884 not found: ID does not exist" Nov 25 19:55:53 crc kubenswrapper[4775]: I1125 19:55:53.829084 4775 generic.go:334] "Generic (PLEG): container finished" podID="4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" containerID="eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838" exitCode=0 Nov 25 19:55:53 crc kubenswrapper[4775]: I1125 19:55:53.829196 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" event={"ID":"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43","Type":"ContainerDied","Data":"eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838"} Nov 25 19:55:54 crc kubenswrapper[4775]: I1125 19:55:54.872429 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e38a12cb-af03-43cf-97fb-05f2e5364c82" path="/var/lib/kubelet/pods/e38a12cb-af03-43cf-97fb-05f2e5364c82/volumes" Nov 25 19:55:54 crc kubenswrapper[4775]: I1125 19:55:54.875522 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 
19:55:54 crc kubenswrapper[4775]: I1125 19:55:54.875813 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" event={"ID":"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43","Type":"ContainerStarted","Data":"816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52"} Nov 25 19:55:54 crc kubenswrapper[4775]: I1125 19:55:54.884035 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" podStartSLOduration=2.884010837 podStartE2EDuration="2.884010837s" podCreationTimestamp="2025-11-25 19:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:55:54.877609164 +0000 UTC m=+1336.793971560" watchObservedRunningTime="2025-11-25 19:55:54.884010837 +0000 UTC m=+1336.800373233" Nov 25 19:56:02 crc kubenswrapper[4775]: I1125 19:56:02.500772 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 19:56:02 crc kubenswrapper[4775]: I1125 19:56:02.584107 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4rmmt"] Nov 25 19:56:02 crc kubenswrapper[4775]: I1125 19:56:02.584378 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" podUID="11ce49d9-f19a-47d5-afd9-7e89820d3400" containerName="dnsmasq-dns" containerID="cri-o://4ecec18135c83a106e0314b3e92a58ba0f0d03db7c0324525a79004451eb1dc9" gracePeriod=10 Nov 25 19:56:02 crc kubenswrapper[4775]: I1125 19:56:02.948082 4775 generic.go:334] "Generic (PLEG): container finished" podID="11ce49d9-f19a-47d5-afd9-7e89820d3400" containerID="4ecec18135c83a106e0314b3e92a58ba0f0d03db7c0324525a79004451eb1dc9" exitCode=0 Nov 25 19:56:02 crc kubenswrapper[4775]: I1125 19:56:02.948145 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" event={"ID":"11ce49d9-f19a-47d5-afd9-7e89820d3400","Type":"ContainerDied","Data":"4ecec18135c83a106e0314b3e92a58ba0f0d03db7c0324525a79004451eb1dc9"} Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.071672 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.165024 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-sb\") pod \"11ce49d9-f19a-47d5-afd9-7e89820d3400\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.165111 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-nb\") pod \"11ce49d9-f19a-47d5-afd9-7e89820d3400\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.165287 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-openstack-edpm-ipam\") pod \"11ce49d9-f19a-47d5-afd9-7e89820d3400\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.165324 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf8x2\" (UniqueName: \"kubernetes.io/projected/11ce49d9-f19a-47d5-afd9-7e89820d3400-kube-api-access-qf8x2\") pod \"11ce49d9-f19a-47d5-afd9-7e89820d3400\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.165390 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-config\") pod \"11ce49d9-f19a-47d5-afd9-7e89820d3400\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.165418 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-dns-svc\") pod \"11ce49d9-f19a-47d5-afd9-7e89820d3400\" (UID: \"11ce49d9-f19a-47d5-afd9-7e89820d3400\") " Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.170693 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ce49d9-f19a-47d5-afd9-7e89820d3400-kube-api-access-qf8x2" (OuterVolumeSpecName: "kube-api-access-qf8x2") pod "11ce49d9-f19a-47d5-afd9-7e89820d3400" (UID: "11ce49d9-f19a-47d5-afd9-7e89820d3400"). InnerVolumeSpecName "kube-api-access-qf8x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.213370 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11ce49d9-f19a-47d5-afd9-7e89820d3400" (UID: "11ce49d9-f19a-47d5-afd9-7e89820d3400"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.213900 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11ce49d9-f19a-47d5-afd9-7e89820d3400" (UID: "11ce49d9-f19a-47d5-afd9-7e89820d3400"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.229929 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-config" (OuterVolumeSpecName: "config") pod "11ce49d9-f19a-47d5-afd9-7e89820d3400" (UID: "11ce49d9-f19a-47d5-afd9-7e89820d3400"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.230684 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "11ce49d9-f19a-47d5-afd9-7e89820d3400" (UID: "11ce49d9-f19a-47d5-afd9-7e89820d3400"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.243737 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11ce49d9-f19a-47d5-afd9-7e89820d3400" (UID: "11ce49d9-f19a-47d5-afd9-7e89820d3400"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.267980 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.268011 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.268024 4775 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.268038 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf8x2\" (UniqueName: \"kubernetes.io/projected/11ce49d9-f19a-47d5-afd9-7e89820d3400-kube-api-access-qf8x2\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.268053 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-config\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.268066 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ce49d9-f19a-47d5-afd9-7e89820d3400-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.963609 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" event={"ID":"11ce49d9-f19a-47d5-afd9-7e89820d3400","Type":"ContainerDied","Data":"d7dede0c8e8d37f073239738468dc6bf4807b2d6b0088afb71cbbfd077852f34"} Nov 25 19:56:03 crc 
kubenswrapper[4775]: I1125 19:56:03.964221 4775 scope.go:117] "RemoveContainer" containerID="4ecec18135c83a106e0314b3e92a58ba0f0d03db7c0324525a79004451eb1dc9" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.964423 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-4rmmt" Nov 25 19:56:03 crc kubenswrapper[4775]: I1125 19:56:03.999614 4775 scope.go:117] "RemoveContainer" containerID="21c930ae616422b9cbcd0ab4a25e83773c9f1c19a58327264e1368a676a26f6d" Nov 25 19:56:04 crc kubenswrapper[4775]: I1125 19:56:04.053156 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4rmmt"] Nov 25 19:56:04 crc kubenswrapper[4775]: I1125 19:56:04.063939 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4rmmt"] Nov 25 19:56:04 crc kubenswrapper[4775]: I1125 19:56:04.866328 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ce49d9-f19a-47d5-afd9-7e89820d3400" path="/var/lib/kubelet/pods/11ce49d9-f19a-47d5-afd9-7e89820d3400/volumes" Nov 25 19:56:11 crc kubenswrapper[4775]: I1125 19:56:11.070083 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:56:11 crc kubenswrapper[4775]: I1125 19:56:11.071109 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.691635 4775 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq"] Nov 25 19:56:12 crc kubenswrapper[4775]: E1125 19:56:12.692701 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38a12cb-af03-43cf-97fb-05f2e5364c82" containerName="dnsmasq-dns" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.692726 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38a12cb-af03-43cf-97fb-05f2e5364c82" containerName="dnsmasq-dns" Nov 25 19:56:12 crc kubenswrapper[4775]: E1125 19:56:12.692743 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ce49d9-f19a-47d5-afd9-7e89820d3400" containerName="init" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.692756 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ce49d9-f19a-47d5-afd9-7e89820d3400" containerName="init" Nov 25 19:56:12 crc kubenswrapper[4775]: E1125 19:56:12.692802 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ce49d9-f19a-47d5-afd9-7e89820d3400" containerName="dnsmasq-dns" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.692820 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ce49d9-f19a-47d5-afd9-7e89820d3400" containerName="dnsmasq-dns" Nov 25 19:56:12 crc kubenswrapper[4775]: E1125 19:56:12.692855 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38a12cb-af03-43cf-97fb-05f2e5364c82" containerName="init" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.692867 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38a12cb-af03-43cf-97fb-05f2e5364c82" containerName="init" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.693195 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ce49d9-f19a-47d5-afd9-7e89820d3400" containerName="dnsmasq-dns" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.693260 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38a12cb-af03-43cf-97fb-05f2e5364c82" containerName="dnsmasq-dns" 
Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.694233 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.698136 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.698502 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.698828 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.699145 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.709576 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq"] Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.755837 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t5xp\" (UniqueName: \"kubernetes.io/projected/b015afc2-1bb1-4ea6-8c16-317c1b285470-kube-api-access-8t5xp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.755908 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.756000 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.756186 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.858001 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t5xp\" (UniqueName: \"kubernetes.io/projected/b015afc2-1bb1-4ea6-8c16-317c1b285470-kube-api-access-8t5xp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.858074 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.858149 4775 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.859329 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.865480 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.865510 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.867209 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:12 crc kubenswrapper[4775]: I1125 19:56:12.879592 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t5xp\" (UniqueName: \"kubernetes.io/projected/b015afc2-1bb1-4ea6-8c16-317c1b285470-kube-api-access-8t5xp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:13 crc kubenswrapper[4775]: I1125 19:56:13.074803 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:13 crc kubenswrapper[4775]: I1125 19:56:13.731191 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq"] Nov 25 19:56:13 crc kubenswrapper[4775]: I1125 19:56:13.740127 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 19:56:14 crc kubenswrapper[4775]: I1125 19:56:14.097417 4775 generic.go:334] "Generic (PLEG): container finished" podID="727ee9a0-e30b-4915-a405-c68a73d8a6e2" containerID="4d87a04176a81c27018ff333e7052f839f8eb9a99113c1232ea3def0b0f41d1b" exitCode=0 Nov 25 19:56:14 crc kubenswrapper[4775]: I1125 19:56:14.097840 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"727ee9a0-e30b-4915-a405-c68a73d8a6e2","Type":"ContainerDied","Data":"4d87a04176a81c27018ff333e7052f839f8eb9a99113c1232ea3def0b0f41d1b"} Nov 25 19:56:14 crc kubenswrapper[4775]: I1125 19:56:14.105254 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" event={"ID":"b015afc2-1bb1-4ea6-8c16-317c1b285470","Type":"ContainerStarted","Data":"20417823e15f716195d9ed9b877b629ba3c9e9f6cdda7322266a2f13981cdaf4"} Nov 25 19:56:14 crc 
kubenswrapper[4775]: I1125 19:56:14.107658 4775 generic.go:334] "Generic (PLEG): container finished" podID="749c0c26-acc5-490a-9723-b45a341360bf" containerID="3722098f52a9b53cf0d09751f506e367a47a5668ee0ecb9e6d348cf62f0c712e" exitCode=0 Nov 25 19:56:14 crc kubenswrapper[4775]: I1125 19:56:14.107694 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"749c0c26-acc5-490a-9723-b45a341360bf","Type":"ContainerDied","Data":"3722098f52a9b53cf0d09751f506e367a47a5668ee0ecb9e6d348cf62f0c712e"} Nov 25 19:56:15 crc kubenswrapper[4775]: I1125 19:56:15.122246 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"749c0c26-acc5-490a-9723-b45a341360bf","Type":"ContainerStarted","Data":"bdf7e6e958da038564cb0b108e72cae4d5bd6003ce6d2071779225eec924d9f5"} Nov 25 19:56:15 crc kubenswrapper[4775]: I1125 19:56:15.123269 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:56:15 crc kubenswrapper[4775]: I1125 19:56:15.124568 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"727ee9a0-e30b-4915-a405-c68a73d8a6e2","Type":"ContainerStarted","Data":"97e34af9af366a7874355a284f1ddaf2b4745f6d4f9e95478a82c662012db2d1"} Nov 25 19:56:15 crc kubenswrapper[4775]: I1125 19:56:15.124765 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 19:56:15 crc kubenswrapper[4775]: I1125 19:56:15.155306 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.15529086 podStartE2EDuration="37.15529086s" podCreationTimestamp="2025-11-25 19:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:56:15.150044838 +0000 UTC m=+1357.066407204" 
watchObservedRunningTime="2025-11-25 19:56:15.15529086 +0000 UTC m=+1357.071653226" Nov 25 19:56:15 crc kubenswrapper[4775]: I1125 19:56:15.175527 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.175505575 podStartE2EDuration="38.175505575s" podCreationTimestamp="2025-11-25 19:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 19:56:15.173884881 +0000 UTC m=+1357.090247247" watchObservedRunningTime="2025-11-25 19:56:15.175505575 +0000 UTC m=+1357.091867951" Nov 25 19:56:23 crc kubenswrapper[4775]: I1125 19:56:23.214358 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" event={"ID":"b015afc2-1bb1-4ea6-8c16-317c1b285470","Type":"ContainerStarted","Data":"eda68ae99d02cc4ec32cf932d8d403d0503414e63f2f02108c50bc481c1403a6"} Nov 25 19:56:23 crc kubenswrapper[4775]: I1125 19:56:23.252290 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" podStartSLOduration=2.28085452 podStartE2EDuration="11.25226611s" podCreationTimestamp="2025-11-25 19:56:12 +0000 UTC" firstStartedPulling="2025-11-25 19:56:13.739587827 +0000 UTC m=+1355.655950223" lastFinishedPulling="2025-11-25 19:56:22.710999407 +0000 UTC m=+1364.627361813" observedRunningTime="2025-11-25 19:56:23.236593811 +0000 UTC m=+1365.152956257" watchObservedRunningTime="2025-11-25 19:56:23.25226611 +0000 UTC m=+1365.168628516" Nov 25 19:56:28 crc kubenswrapper[4775]: I1125 19:56:28.670937 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 19:56:29 crc kubenswrapper[4775]: I1125 19:56:29.085830 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 19:56:35 crc 
kubenswrapper[4775]: I1125 19:56:35.348777 4775 generic.go:334] "Generic (PLEG): container finished" podID="b015afc2-1bb1-4ea6-8c16-317c1b285470" containerID="eda68ae99d02cc4ec32cf932d8d403d0503414e63f2f02108c50bc481c1403a6" exitCode=0 Nov 25 19:56:35 crc kubenswrapper[4775]: I1125 19:56:35.348893 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" event={"ID":"b015afc2-1bb1-4ea6-8c16-317c1b285470","Type":"ContainerDied","Data":"eda68ae99d02cc4ec32cf932d8d403d0503414e63f2f02108c50bc481c1403a6"} Nov 25 19:56:36 crc kubenswrapper[4775]: I1125 19:56:36.858014 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:36 crc kubenswrapper[4775]: I1125 19:56:36.955026 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t5xp\" (UniqueName: \"kubernetes.io/projected/b015afc2-1bb1-4ea6-8c16-317c1b285470-kube-api-access-8t5xp\") pod \"b015afc2-1bb1-4ea6-8c16-317c1b285470\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " Nov 25 19:56:36 crc kubenswrapper[4775]: I1125 19:56:36.955226 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-repo-setup-combined-ca-bundle\") pod \"b015afc2-1bb1-4ea6-8c16-317c1b285470\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " Nov 25 19:56:36 crc kubenswrapper[4775]: I1125 19:56:36.955271 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-inventory\") pod \"b015afc2-1bb1-4ea6-8c16-317c1b285470\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " Nov 25 19:56:36 crc kubenswrapper[4775]: I1125 19:56:36.955309 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-ssh-key\") pod \"b015afc2-1bb1-4ea6-8c16-317c1b285470\" (UID: \"b015afc2-1bb1-4ea6-8c16-317c1b285470\") " Nov 25 19:56:36 crc kubenswrapper[4775]: I1125 19:56:36.963106 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b015afc2-1bb1-4ea6-8c16-317c1b285470-kube-api-access-8t5xp" (OuterVolumeSpecName: "kube-api-access-8t5xp") pod "b015afc2-1bb1-4ea6-8c16-317c1b285470" (UID: "b015afc2-1bb1-4ea6-8c16-317c1b285470"). InnerVolumeSpecName "kube-api-access-8t5xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:56:36 crc kubenswrapper[4775]: I1125 19:56:36.963339 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b015afc2-1bb1-4ea6-8c16-317c1b285470" (UID: "b015afc2-1bb1-4ea6-8c16-317c1b285470"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:56:36 crc kubenswrapper[4775]: I1125 19:56:36.991531 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b015afc2-1bb1-4ea6-8c16-317c1b285470" (UID: "b015afc2-1bb1-4ea6-8c16-317c1b285470"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:56:36 crc kubenswrapper[4775]: I1125 19:56:36.996231 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-inventory" (OuterVolumeSpecName: "inventory") pod "b015afc2-1bb1-4ea6-8c16-317c1b285470" (UID: "b015afc2-1bb1-4ea6-8c16-317c1b285470"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.057143 4775 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.057197 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.057240 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b015afc2-1bb1-4ea6-8c16-317c1b285470-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.057258 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t5xp\" (UniqueName: \"kubernetes.io/projected/b015afc2-1bb1-4ea6-8c16-317c1b285470-kube-api-access-8t5xp\") on node \"crc\" DevicePath \"\"" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.376397 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" event={"ID":"b015afc2-1bb1-4ea6-8c16-317c1b285470","Type":"ContainerDied","Data":"20417823e15f716195d9ed9b877b629ba3c9e9f6cdda7322266a2f13981cdaf4"} Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.376458 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20417823e15f716195d9ed9b877b629ba3c9e9f6cdda7322266a2f13981cdaf4" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.376545 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.501089 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972"] Nov 25 19:56:37 crc kubenswrapper[4775]: E1125 19:56:37.501913 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b015afc2-1bb1-4ea6-8c16-317c1b285470" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.501956 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b015afc2-1bb1-4ea6-8c16-317c1b285470" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.502488 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b015afc2-1bb1-4ea6-8c16-317c1b285470" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.503954 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.506716 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.510041 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.510110 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.510329 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.514811 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972"] Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.568019 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf62t\" (UniqueName: \"kubernetes.io/projected/84915a86-089c-4a6e-9890-1758d0912875-kube-api-access-qf62t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.568310 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 
19:56:37.568445 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.568616 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.670715 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf62t\" (UniqueName: \"kubernetes.io/projected/84915a86-089c-4a6e-9890-1758d0912875-kube-api-access-qf62t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.671068 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.671278 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.671614 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.677081 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.677704 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.678712 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.695964 4775 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-qf62t\" (UniqueName: \"kubernetes.io/projected/84915a86-089c-4a6e-9890-1758d0912875-kube-api-access-qf62t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk972\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:37 crc kubenswrapper[4775]: I1125 19:56:37.829424 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:56:38 crc kubenswrapper[4775]: W1125 19:56:38.444529 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84915a86_089c_4a6e_9890_1758d0912875.slice/crio-314768799c80fb3545e192b3f22550d82e867455a4058def955397459acec986 WatchSource:0}: Error finding container 314768799c80fb3545e192b3f22550d82e867455a4058def955397459acec986: Status 404 returned error can't find the container with id 314768799c80fb3545e192b3f22550d82e867455a4058def955397459acec986 Nov 25 19:56:38 crc kubenswrapper[4775]: I1125 19:56:38.447701 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972"] Nov 25 19:56:38 crc kubenswrapper[4775]: I1125 19:56:38.898138 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 19:56:39 crc kubenswrapper[4775]: I1125 19:56:39.400522 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" event={"ID":"84915a86-089c-4a6e-9890-1758d0912875","Type":"ContainerStarted","Data":"dc91a2dc8add14b559abd477f7a75a2d4b29e530fdf6c374d3a1022e03c04120"} Nov 25 19:56:39 crc kubenswrapper[4775]: I1125 19:56:39.401002 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" 
event={"ID":"84915a86-089c-4a6e-9890-1758d0912875","Type":"ContainerStarted","Data":"314768799c80fb3545e192b3f22550d82e867455a4058def955397459acec986"} Nov 25 19:56:39 crc kubenswrapper[4775]: I1125 19:56:39.439686 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" podStartSLOduration=1.994066374 podStartE2EDuration="2.439631037s" podCreationTimestamp="2025-11-25 19:56:37 +0000 UTC" firstStartedPulling="2025-11-25 19:56:38.44977497 +0000 UTC m=+1380.366137336" lastFinishedPulling="2025-11-25 19:56:38.895339613 +0000 UTC m=+1380.811701999" observedRunningTime="2025-11-25 19:56:39.423976269 +0000 UTC m=+1381.340338665" watchObservedRunningTime="2025-11-25 19:56:39.439631037 +0000 UTC m=+1381.355993443" Nov 25 19:56:41 crc kubenswrapper[4775]: I1125 19:56:41.070976 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:56:41 crc kubenswrapper[4775]: I1125 19:56:41.071374 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:56:41 crc kubenswrapper[4775]: I1125 19:56:41.071438 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:56:41 crc kubenswrapper[4775]: I1125 19:56:41.072588 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"cca31bd7d1401819a1ce35374bd96d54908cbd0258987317dc941bcce28d9472"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 19:56:41 crc kubenswrapper[4775]: I1125 19:56:41.072718 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://cca31bd7d1401819a1ce35374bd96d54908cbd0258987317dc941bcce28d9472" gracePeriod=600 Nov 25 19:56:41 crc kubenswrapper[4775]: I1125 19:56:41.429274 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="cca31bd7d1401819a1ce35374bd96d54908cbd0258987317dc941bcce28d9472" exitCode=0 Nov 25 19:56:41 crc kubenswrapper[4775]: I1125 19:56:41.429374 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"cca31bd7d1401819a1ce35374bd96d54908cbd0258987317dc941bcce28d9472"} Nov 25 19:56:41 crc kubenswrapper[4775]: I1125 19:56:41.429691 4775 scope.go:117] "RemoveContainer" containerID="9d57ed892e1f28c6ded4ad19e2041e94c1c82f1ac3bd35631b061f8f7717302b" Nov 25 19:56:42 crc kubenswrapper[4775]: I1125 19:56:42.445715 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743"} Nov 25 19:57:48 crc kubenswrapper[4775]: I1125 19:57:48.287769 4775 scope.go:117] "RemoveContainer" containerID="b407b7b65ec6b0c5c625b26dfc6afc952b2c1727f7f6a8cd4cec69e56e8cddb8" Nov 25 19:58:41 crc kubenswrapper[4775]: I1125 
19:58:41.070593 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:58:41 crc kubenswrapper[4775]: I1125 19:58:41.071217 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:58:48 crc kubenswrapper[4775]: I1125 19:58:48.400683 4775 scope.go:117] "RemoveContainer" containerID="e5105b81237ebcc4650e85e6a960fe98ba0a0a7ae3611d067275117aef61b711" Nov 25 19:59:11 crc kubenswrapper[4775]: I1125 19:59:11.070489 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:59:11 crc kubenswrapper[4775]: I1125 19:59:11.071196 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:59:41 crc kubenswrapper[4775]: I1125 19:59:41.069995 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 19:59:41 crc 
kubenswrapper[4775]: I1125 19:59:41.070608 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 19:59:41 crc kubenswrapper[4775]: I1125 19:59:41.070694 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 19:59:41 crc kubenswrapper[4775]: I1125 19:59:41.071693 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 19:59:41 crc kubenswrapper[4775]: I1125 19:59:41.071797 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" gracePeriod=600 Nov 25 19:59:41 crc kubenswrapper[4775]: E1125 19:59:41.219087 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 19:59:41 crc kubenswrapper[4775]: I1125 19:59:41.565310 4775 generic.go:334] "Generic 
(PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" exitCode=0 Nov 25 19:59:41 crc kubenswrapper[4775]: I1125 19:59:41.565358 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743"} Nov 25 19:59:41 crc kubenswrapper[4775]: I1125 19:59:41.565444 4775 scope.go:117] "RemoveContainer" containerID="cca31bd7d1401819a1ce35374bd96d54908cbd0258987317dc941bcce28d9472" Nov 25 19:59:41 crc kubenswrapper[4775]: I1125 19:59:41.566947 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 19:59:41 crc kubenswrapper[4775]: E1125 19:59:41.567525 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 19:59:52 crc kubenswrapper[4775]: I1125 19:59:52.847972 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 19:59:52 crc kubenswrapper[4775]: E1125 19:59:52.848860 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 19:59:54 crc kubenswrapper[4775]: I1125 19:59:54.886393 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sn7l4"] Nov 25 19:59:54 crc kubenswrapper[4775]: I1125 19:59:54.890235 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:54 crc kubenswrapper[4775]: I1125 19:59:54.891609 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sn7l4"] Nov 25 19:59:54 crc kubenswrapper[4775]: I1125 19:59:54.988069 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4flc\" (UniqueName: \"kubernetes.io/projected/ffcd59af-0c47-47f0-84e6-743efe7a439b-kube-api-access-g4flc\") pod \"community-operators-sn7l4\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:54 crc kubenswrapper[4775]: I1125 19:59:54.988387 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-utilities\") pod \"community-operators-sn7l4\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:54 crc kubenswrapper[4775]: I1125 19:59:54.988540 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-catalog-content\") pod \"community-operators-sn7l4\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.090136 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4flc\" 
(UniqueName: \"kubernetes.io/projected/ffcd59af-0c47-47f0-84e6-743efe7a439b-kube-api-access-g4flc\") pod \"community-operators-sn7l4\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.090778 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-utilities\") pod \"community-operators-sn7l4\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.091482 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-utilities\") pod \"community-operators-sn7l4\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.091747 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-catalog-content\") pod \"community-operators-sn7l4\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.092201 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-catalog-content\") pod \"community-operators-sn7l4\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.116100 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4flc\" (UniqueName: 
\"kubernetes.io/projected/ffcd59af-0c47-47f0-84e6-743efe7a439b-kube-api-access-g4flc\") pod \"community-operators-sn7l4\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.221170 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sn7l4" Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.722735 4775 generic.go:334] "Generic (PLEG): container finished" podID="84915a86-089c-4a6e-9890-1758d0912875" containerID="dc91a2dc8add14b559abd477f7a75a2d4b29e530fdf6c374d3a1022e03c04120" exitCode=0 Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.722833 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" event={"ID":"84915a86-089c-4a6e-9890-1758d0912875","Type":"ContainerDied","Data":"dc91a2dc8add14b559abd477f7a75a2d4b29e530fdf6c374d3a1022e03c04120"} Nov 25 19:59:55 crc kubenswrapper[4775]: I1125 19:59:55.778874 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sn7l4"] Nov 25 19:59:56 crc kubenswrapper[4775]: I1125 19:59:56.736437 4775 generic.go:334] "Generic (PLEG): container finished" podID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerID="d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a" exitCode=0 Nov 25 19:59:56 crc kubenswrapper[4775]: I1125 19:59:56.736518 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn7l4" event={"ID":"ffcd59af-0c47-47f0-84e6-743efe7a439b","Type":"ContainerDied","Data":"d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a"} Nov 25 19:59:56 crc kubenswrapper[4775]: I1125 19:59:56.736881 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn7l4" 
event={"ID":"ffcd59af-0c47-47f0-84e6-743efe7a439b","Type":"ContainerStarted","Data":"28311920d1507523784efb49e9679e0e3d91cd920bc908b93272d2d731cb2bc0"} Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.209214 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.336801 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-bootstrap-combined-ca-bundle\") pod \"84915a86-089c-4a6e-9890-1758d0912875\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.336946 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf62t\" (UniqueName: \"kubernetes.io/projected/84915a86-089c-4a6e-9890-1758d0912875-kube-api-access-qf62t\") pod \"84915a86-089c-4a6e-9890-1758d0912875\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.337043 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-inventory\") pod \"84915a86-089c-4a6e-9890-1758d0912875\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.337142 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-ssh-key\") pod \"84915a86-089c-4a6e-9890-1758d0912875\" (UID: \"84915a86-089c-4a6e-9890-1758d0912875\") " Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.345803 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/84915a86-089c-4a6e-9890-1758d0912875-kube-api-access-qf62t" (OuterVolumeSpecName: "kube-api-access-qf62t") pod "84915a86-089c-4a6e-9890-1758d0912875" (UID: "84915a86-089c-4a6e-9890-1758d0912875"). InnerVolumeSpecName "kube-api-access-qf62t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.346107 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "84915a86-089c-4a6e-9890-1758d0912875" (UID: "84915a86-089c-4a6e-9890-1758d0912875"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.390737 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-inventory" (OuterVolumeSpecName: "inventory") pod "84915a86-089c-4a6e-9890-1758d0912875" (UID: "84915a86-089c-4a6e-9890-1758d0912875"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.403103 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "84915a86-089c-4a6e-9890-1758d0912875" (UID: "84915a86-089c-4a6e-9890-1758d0912875"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.438924 4775 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.439200 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf62t\" (UniqueName: \"kubernetes.io/projected/84915a86-089c-4a6e-9890-1758d0912875-kube-api-access-qf62t\") on node \"crc\" DevicePath \"\"" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.439302 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.439387 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84915a86-089c-4a6e-9890-1758d0912875-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.748316 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" event={"ID":"84915a86-089c-4a6e-9890-1758d0912875","Type":"ContainerDied","Data":"314768799c80fb3545e192b3f22550d82e867455a4058def955397459acec986"} Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.748684 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="314768799c80fb3545e192b3f22550d82e867455a4058def955397459acec986" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.748402 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.847283 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd"] Nov 25 19:59:57 crc kubenswrapper[4775]: E1125 19:59:57.848230 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84915a86-089c-4a6e-9890-1758d0912875" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.848250 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="84915a86-089c-4a6e-9890-1758d0912875" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.848427 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="84915a86-089c-4a6e-9890-1758d0912875" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.849012 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.851989 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.856239 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.857169 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.857389 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.857538 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd"] Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.959249 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f4f2\" (UniqueName: \"kubernetes.io/projected/695faa37-300a-4e2a-b516-a5053d5663dc-kube-api-access-4f4f2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vclbd\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:57 crc kubenswrapper[4775]: I1125 19:59:57.959588 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vclbd\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:57 crc kubenswrapper[4775]: 
I1125 19:59:57.959839 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vclbd\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.061677 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vclbd\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.061789 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vclbd\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.061835 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f4f2\" (UniqueName: \"kubernetes.io/projected/695faa37-300a-4e2a-b516-a5053d5663dc-kube-api-access-4f4f2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vclbd\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.065630 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-vclbd\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.066174 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vclbd\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.081117 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f4f2\" (UniqueName: \"kubernetes.io/projected/695faa37-300a-4e2a-b516-a5053d5663dc-kube-api-access-4f4f2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vclbd\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.169432 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.736214 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd"] Nov 25 19:59:58 crc kubenswrapper[4775]: W1125 19:59:58.738941 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod695faa37_300a_4e2a_b516_a5053d5663dc.slice/crio-bf58911c017c1357618ff5bbc13fd7a6f71234d93d8c3e0da4cc3017da57c18e WatchSource:0}: Error finding container bf58911c017c1357618ff5bbc13fd7a6f71234d93d8c3e0da4cc3017da57c18e: Status 404 returned error can't find the container with id bf58911c017c1357618ff5bbc13fd7a6f71234d93d8c3e0da4cc3017da57c18e Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.758808 4775 generic.go:334] "Generic (PLEG): container finished" podID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerID="7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371" exitCode=0 Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.758884 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn7l4" event={"ID":"ffcd59af-0c47-47f0-84e6-743efe7a439b","Type":"ContainerDied","Data":"7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371"} Nov 25 19:59:58 crc kubenswrapper[4775]: I1125 19:59:58.761464 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" event={"ID":"695faa37-300a-4e2a-b516-a5053d5663dc","Type":"ContainerStarted","Data":"bf58911c017c1357618ff5bbc13fd7a6f71234d93d8c3e0da4cc3017da57c18e"} Nov 25 19:59:59 crc kubenswrapper[4775]: I1125 19:59:59.770585 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" 
event={"ID":"695faa37-300a-4e2a-b516-a5053d5663dc","Type":"ContainerStarted","Data":"a5492b7d3899805941f7086d746b8e0490a39870dd419db30a7dd3eb0373c9d3"} Nov 25 19:59:59 crc kubenswrapper[4775]: I1125 19:59:59.778706 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn7l4" event={"ID":"ffcd59af-0c47-47f0-84e6-743efe7a439b","Type":"ContainerStarted","Data":"747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb"} Nov 25 19:59:59 crc kubenswrapper[4775]: I1125 19:59:59.804958 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" podStartSLOduration=2.326678148 podStartE2EDuration="2.804932047s" podCreationTimestamp="2025-11-25 19:59:57 +0000 UTC" firstStartedPulling="2025-11-25 19:59:58.743783143 +0000 UTC m=+1580.660145519" lastFinishedPulling="2025-11-25 19:59:59.222037042 +0000 UTC m=+1581.138399418" observedRunningTime="2025-11-25 19:59:59.79643271 +0000 UTC m=+1581.712795106" watchObservedRunningTime="2025-11-25 19:59:59.804932047 +0000 UTC m=+1581.721294463" Nov 25 19:59:59 crc kubenswrapper[4775]: I1125 19:59:59.825857 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sn7l4" podStartSLOduration=3.325720366 podStartE2EDuration="5.825842816s" podCreationTimestamp="2025-11-25 19:59:54 +0000 UTC" firstStartedPulling="2025-11-25 19:59:56.738985458 +0000 UTC m=+1578.655347854" lastFinishedPulling="2025-11-25 19:59:59.239107898 +0000 UTC m=+1581.155470304" observedRunningTime="2025-11-25 19:59:59.816135147 +0000 UTC m=+1581.732497543" watchObservedRunningTime="2025-11-25 19:59:59.825842816 +0000 UTC m=+1581.742205172" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.156224 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2"] Nov 25 20:00:00 crc kubenswrapper[4775]: 
I1125 20:00:00.158078 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.170199 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2"] Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.190348 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.190727 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.323772 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzxcx\" (UniqueName: \"kubernetes.io/projected/0ced7361-4485-43e8-b942-4417fb168b44-kube-api-access-xzxcx\") pod \"collect-profiles-29401680-btws2\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.324052 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ced7361-4485-43e8-b942-4417fb168b44-config-volume\") pod \"collect-profiles-29401680-btws2\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.324178 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ced7361-4485-43e8-b942-4417fb168b44-secret-volume\") pod \"collect-profiles-29401680-btws2\" (UID: 
\"0ced7361-4485-43e8-b942-4417fb168b44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.426625 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzxcx\" (UniqueName: \"kubernetes.io/projected/0ced7361-4485-43e8-b942-4417fb168b44-kube-api-access-xzxcx\") pod \"collect-profiles-29401680-btws2\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.427213 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ced7361-4485-43e8-b942-4417fb168b44-config-volume\") pod \"collect-profiles-29401680-btws2\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.427316 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ced7361-4485-43e8-b942-4417fb168b44-secret-volume\") pod \"collect-profiles-29401680-btws2\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.428733 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ced7361-4485-43e8-b942-4417fb168b44-config-volume\") pod \"collect-profiles-29401680-btws2\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.436423 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0ced7361-4485-43e8-b942-4417fb168b44-secret-volume\") pod \"collect-profiles-29401680-btws2\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.455229 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzxcx\" (UniqueName: \"kubernetes.io/projected/0ced7361-4485-43e8-b942-4417fb168b44-kube-api-access-xzxcx\") pod \"collect-profiles-29401680-btws2\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.505625 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:00 crc kubenswrapper[4775]: I1125 20:00:00.945483 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2"] Nov 25 20:00:01 crc kubenswrapper[4775]: I1125 20:00:01.801798 4775 generic.go:334] "Generic (PLEG): container finished" podID="0ced7361-4485-43e8-b942-4417fb168b44" containerID="bdf304c98187bfb77c74220ebd64d0c4713fe3fd2acfaf78ced4411b6e900c50" exitCode=0 Nov 25 20:00:01 crc kubenswrapper[4775]: I1125 20:00:01.801865 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" event={"ID":"0ced7361-4485-43e8-b942-4417fb168b44","Type":"ContainerDied","Data":"bdf304c98187bfb77c74220ebd64d0c4713fe3fd2acfaf78ced4411b6e900c50"} Nov 25 20:00:01 crc kubenswrapper[4775]: I1125 20:00:01.801940 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" 
event={"ID":"0ced7361-4485-43e8-b942-4417fb168b44","Type":"ContainerStarted","Data":"6ecfb299f9f1baa93e9b66cc0475a02d26782aeb8d24a079444cb9effc04aa7b"} Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.273879 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.408240 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ced7361-4485-43e8-b942-4417fb168b44-config-volume\") pod \"0ced7361-4485-43e8-b942-4417fb168b44\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.408812 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ced7361-4485-43e8-b942-4417fb168b44-secret-volume\") pod \"0ced7361-4485-43e8-b942-4417fb168b44\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.408915 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzxcx\" (UniqueName: \"kubernetes.io/projected/0ced7361-4485-43e8-b942-4417fb168b44-kube-api-access-xzxcx\") pod \"0ced7361-4485-43e8-b942-4417fb168b44\" (UID: \"0ced7361-4485-43e8-b942-4417fb168b44\") " Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.410032 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ced7361-4485-43e8-b942-4417fb168b44-config-volume" (OuterVolumeSpecName: "config-volume") pod "0ced7361-4485-43e8-b942-4417fb168b44" (UID: "0ced7361-4485-43e8-b942-4417fb168b44"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.416512 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ced7361-4485-43e8-b942-4417fb168b44-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0ced7361-4485-43e8-b942-4417fb168b44" (UID: "0ced7361-4485-43e8-b942-4417fb168b44"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.416765 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ced7361-4485-43e8-b942-4417fb168b44-kube-api-access-xzxcx" (OuterVolumeSpecName: "kube-api-access-xzxcx") pod "0ced7361-4485-43e8-b942-4417fb168b44" (UID: "0ced7361-4485-43e8-b942-4417fb168b44"). InnerVolumeSpecName "kube-api-access-xzxcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.510730 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ced7361-4485-43e8-b942-4417fb168b44-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.510763 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ced7361-4485-43e8-b942-4417fb168b44-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.510775 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzxcx\" (UniqueName: \"kubernetes.io/projected/0ced7361-4485-43e8-b942-4417fb168b44-kube-api-access-xzxcx\") on node \"crc\" DevicePath \"\"" Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.830732 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" 
event={"ID":"0ced7361-4485-43e8-b942-4417fb168b44","Type":"ContainerDied","Data":"6ecfb299f9f1baa93e9b66cc0475a02d26782aeb8d24a079444cb9effc04aa7b"} Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.830786 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ecfb299f9f1baa93e9b66cc0475a02d26782aeb8d24a079444cb9effc04aa7b" Nov 25 20:00:03 crc kubenswrapper[4775]: I1125 20:00:03.830826 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2" Nov 25 20:00:05 crc kubenswrapper[4775]: I1125 20:00:05.221963 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sn7l4" Nov 25 20:00:05 crc kubenswrapper[4775]: I1125 20:00:05.222426 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sn7l4" Nov 25 20:00:06 crc kubenswrapper[4775]: I1125 20:00:06.290615 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sn7l4" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerName="registry-server" probeResult="failure" output=< Nov 25 20:00:06 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Nov 25 20:00:06 crc kubenswrapper[4775]: > Nov 25 20:00:07 crc kubenswrapper[4775]: I1125 20:00:07.846983 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:00:07 crc kubenswrapper[4775]: E1125 20:00:07.847413 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:00:15 crc kubenswrapper[4775]: I1125 20:00:15.300380 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sn7l4" Nov 25 20:00:15 crc kubenswrapper[4775]: I1125 20:00:15.386098 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sn7l4" Nov 25 20:00:15 crc kubenswrapper[4775]: I1125 20:00:15.551776 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sn7l4"] Nov 25 20:00:16 crc kubenswrapper[4775]: I1125 20:00:16.995118 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sn7l4" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerName="registry-server" containerID="cri-o://747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb" gracePeriod=2 Nov 25 20:00:17 crc kubenswrapper[4775]: I1125 20:00:17.982942 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sn7l4" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.008530 4775 generic.go:334] "Generic (PLEG): container finished" podID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerID="747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb" exitCode=0 Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.008588 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn7l4" event={"ID":"ffcd59af-0c47-47f0-84e6-743efe7a439b","Type":"ContainerDied","Data":"747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb"} Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.008606 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sn7l4" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.008674 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn7l4" event={"ID":"ffcd59af-0c47-47f0-84e6-743efe7a439b","Type":"ContainerDied","Data":"28311920d1507523784efb49e9679e0e3d91cd920bc908b93272d2d731cb2bc0"} Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.008704 4775 scope.go:117] "RemoveContainer" containerID="747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.030300 4775 scope.go:117] "RemoveContainer" containerID="7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.053520 4775 scope.go:117] "RemoveContainer" containerID="d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.110884 4775 scope.go:117] "RemoveContainer" containerID="747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb" Nov 25 20:00:18 crc kubenswrapper[4775]: E1125 20:00:18.111523 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb\": container with ID starting with 747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb not found: ID does not exist" containerID="747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.111554 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb"} err="failed to get container status \"747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb\": rpc error: code = NotFound desc = could not find container 
\"747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb\": container with ID starting with 747452bc9bec37412d2de07d1ea16675011b2d404920f331f9c17d0931899ddb not found: ID does not exist" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.111574 4775 scope.go:117] "RemoveContainer" containerID="7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371" Nov 25 20:00:18 crc kubenswrapper[4775]: E1125 20:00:18.112088 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371\": container with ID starting with 7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371 not found: ID does not exist" containerID="7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.112107 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371"} err="failed to get container status \"7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371\": rpc error: code = NotFound desc = could not find container \"7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371\": container with ID starting with 7f5b624b3af506534b5e7c59c068d101c208e3339ff54e2cba22cb8b9d272371 not found: ID does not exist" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.112120 4775 scope.go:117] "RemoveContainer" containerID="d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a" Nov 25 20:00:18 crc kubenswrapper[4775]: E1125 20:00:18.112379 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a\": container with ID starting with d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a not found: ID does not exist" 
containerID="d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.112395 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a"} err="failed to get container status \"d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a\": rpc error: code = NotFound desc = could not find container \"d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a\": container with ID starting with d234ae618a6c5bdf0a1b154ea0149ed70af2b71c892c8433f0bab106b1be5f4a not found: ID does not exist" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.143588 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-utilities\") pod \"ffcd59af-0c47-47f0-84e6-743efe7a439b\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.143697 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-catalog-content\") pod \"ffcd59af-0c47-47f0-84e6-743efe7a439b\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.143837 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4flc\" (UniqueName: \"kubernetes.io/projected/ffcd59af-0c47-47f0-84e6-743efe7a439b-kube-api-access-g4flc\") pod \"ffcd59af-0c47-47f0-84e6-743efe7a439b\" (UID: \"ffcd59af-0c47-47f0-84e6-743efe7a439b\") " Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.145718 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-utilities" (OuterVolumeSpecName: "utilities") pod 
"ffcd59af-0c47-47f0-84e6-743efe7a439b" (UID: "ffcd59af-0c47-47f0-84e6-743efe7a439b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.150581 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcd59af-0c47-47f0-84e6-743efe7a439b-kube-api-access-g4flc" (OuterVolumeSpecName: "kube-api-access-g4flc") pod "ffcd59af-0c47-47f0-84e6-743efe7a439b" (UID: "ffcd59af-0c47-47f0-84e6-743efe7a439b"). InnerVolumeSpecName "kube-api-access-g4flc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.191577 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffcd59af-0c47-47f0-84e6-743efe7a439b" (UID: "ffcd59af-0c47-47f0-84e6-743efe7a439b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.247205 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4flc\" (UniqueName: \"kubernetes.io/projected/ffcd59af-0c47-47f0-84e6-743efe7a439b-kube-api-access-g4flc\") on node \"crc\" DevicePath \"\"" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.247461 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.247473 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcd59af-0c47-47f0-84e6-743efe7a439b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.338062 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sn7l4"] Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.347170 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sn7l4"] Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.859082 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:00:18 crc kubenswrapper[4775]: E1125 20:00:18.859477 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:00:18 crc kubenswrapper[4775]: I1125 20:00:18.874073 4775 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" path="/var/lib/kubelet/pods/ffcd59af-0c47-47f0-84e6-743efe7a439b/volumes" Nov 25 20:00:30 crc kubenswrapper[4775]: I1125 20:00:30.849753 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:00:30 crc kubenswrapper[4775]: E1125 20:00:30.852021 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:00:44 crc kubenswrapper[4775]: I1125 20:00:44.847344 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:00:44 crc kubenswrapper[4775]: E1125 20:00:44.848621 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:00:48 crc kubenswrapper[4775]: I1125 20:00:48.524116 4775 scope.go:117] "RemoveContainer" containerID="658fd9e83c16e2ff23cc0e046d24da0d467a9783638e7bc82bab4a8fc0f81ace" Nov 25 20:00:48 crc kubenswrapper[4775]: I1125 20:00:48.546521 4775 scope.go:117] "RemoveContainer" containerID="c66b95a042c094d9287457daeedbe8ba96422a0454677269d717fd41a37b8967" Nov 25 20:00:57 crc kubenswrapper[4775]: I1125 20:00:57.850528 4775 scope.go:117] "RemoveContainer" 
containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:00:57 crc kubenswrapper[4775]: E1125 20:00:57.851750 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.172869 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401681-rzgcc"] Nov 25 20:01:00 crc kubenswrapper[4775]: E1125 20:01:00.176099 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerName="extract-content" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.176328 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerName="extract-content" Nov 25 20:01:00 crc kubenswrapper[4775]: E1125 20:01:00.176545 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerName="extract-utilities" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.176747 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerName="extract-utilities" Nov 25 20:01:00 crc kubenswrapper[4775]: E1125 20:01:00.176949 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ced7361-4485-43e8-b942-4417fb168b44" containerName="collect-profiles" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.177115 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ced7361-4485-43e8-b942-4417fb168b44" containerName="collect-profiles" Nov 25 20:01:00 crc kubenswrapper[4775]: E1125 20:01:00.177304 4775 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerName="registry-server" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.177451 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerName="registry-server" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.177985 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ced7361-4485-43e8-b942-4417fb168b44" containerName="collect-profiles" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.178165 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcd59af-0c47-47f0-84e6-743efe7a439b" containerName="registry-server" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.179340 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.188935 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401681-rzgcc"] Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.299875 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9854s\" (UniqueName: \"kubernetes.io/projected/050bd532-dae0-46ac-93f0-096c75d4c0a6-kube-api-access-9854s\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.299968 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-fernet-keys\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.300190 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-combined-ca-bundle\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.300254 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-config-data\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.401978 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-fernet-keys\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.402226 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-combined-ca-bundle\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.402279 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-config-data\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.402416 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9854s\" (UniqueName: \"kubernetes.io/projected/050bd532-dae0-46ac-93f0-096c75d4c0a6-kube-api-access-9854s\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.410062 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-combined-ca-bundle\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.413024 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-config-data\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.415012 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-fernet-keys\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.447217 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9854s\" (UniqueName: \"kubernetes.io/projected/050bd532-dae0-46ac-93f0-096c75d4c0a6-kube-api-access-9854s\") pod \"keystone-cron-29401681-rzgcc\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:00 crc kubenswrapper[4775]: I1125 20:01:00.526129 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:01 crc kubenswrapper[4775]: I1125 20:01:01.087343 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401681-rzgcc"] Nov 25 20:01:01 crc kubenswrapper[4775]: I1125 20:01:01.461308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401681-rzgcc" event={"ID":"050bd532-dae0-46ac-93f0-096c75d4c0a6","Type":"ContainerStarted","Data":"7a445cf721ddaabc2a1a4f9170afa59409e84485f685cb68abbd21aa7696d44a"} Nov 25 20:01:01 crc kubenswrapper[4775]: I1125 20:01:01.461362 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401681-rzgcc" event={"ID":"050bd532-dae0-46ac-93f0-096c75d4c0a6","Type":"ContainerStarted","Data":"06154bfdbd474fadfae9019f186cbd1d4cf46b36481811742201a8b98e0bc82b"} Nov 25 20:01:01 crc kubenswrapper[4775]: I1125 20:01:01.488574 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401681-rzgcc" podStartSLOduration=1.488554579 podStartE2EDuration="1.488554579s" podCreationTimestamp="2025-11-25 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:01:01.480517321 +0000 UTC m=+1643.396879697" watchObservedRunningTime="2025-11-25 20:01:01.488554579 +0000 UTC m=+1643.404916945" Nov 25 20:01:03 crc kubenswrapper[4775]: I1125 20:01:03.494415 4775 generic.go:334] "Generic (PLEG): container finished" podID="050bd532-dae0-46ac-93f0-096c75d4c0a6" containerID="7a445cf721ddaabc2a1a4f9170afa59409e84485f685cb68abbd21aa7696d44a" exitCode=0 Nov 25 20:01:03 crc kubenswrapper[4775]: I1125 20:01:03.494457 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401681-rzgcc" 
event={"ID":"050bd532-dae0-46ac-93f0-096c75d4c0a6","Type":"ContainerDied","Data":"7a445cf721ddaabc2a1a4f9170afa59409e84485f685cb68abbd21aa7696d44a"} Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.836969 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.894613 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-config-data\") pod \"050bd532-dae0-46ac-93f0-096c75d4c0a6\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.894858 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9854s\" (UniqueName: \"kubernetes.io/projected/050bd532-dae0-46ac-93f0-096c75d4c0a6-kube-api-access-9854s\") pod \"050bd532-dae0-46ac-93f0-096c75d4c0a6\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.894952 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-fernet-keys\") pod \"050bd532-dae0-46ac-93f0-096c75d4c0a6\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.895079 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-combined-ca-bundle\") pod \"050bd532-dae0-46ac-93f0-096c75d4c0a6\" (UID: \"050bd532-dae0-46ac-93f0-096c75d4c0a6\") " Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.905199 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050bd532-dae0-46ac-93f0-096c75d4c0a6-kube-api-access-9854s" 
(OuterVolumeSpecName: "kube-api-access-9854s") pod "050bd532-dae0-46ac-93f0-096c75d4c0a6" (UID: "050bd532-dae0-46ac-93f0-096c75d4c0a6"). InnerVolumeSpecName "kube-api-access-9854s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.923343 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "050bd532-dae0-46ac-93f0-096c75d4c0a6" (UID: "050bd532-dae0-46ac-93f0-096c75d4c0a6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.929018 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "050bd532-dae0-46ac-93f0-096c75d4c0a6" (UID: "050bd532-dae0-46ac-93f0-096c75d4c0a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.967892 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-config-data" (OuterVolumeSpecName: "config-data") pod "050bd532-dae0-46ac-93f0-096c75d4c0a6" (UID: "050bd532-dae0-46ac-93f0-096c75d4c0a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.997237 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9854s\" (UniqueName: \"kubernetes.io/projected/050bd532-dae0-46ac-93f0-096c75d4c0a6-kube-api-access-9854s\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.997274 4775 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.997286 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:04 crc kubenswrapper[4775]: I1125 20:01:04.997295 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050bd532-dae0-46ac-93f0-096c75d4c0a6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:05 crc kubenswrapper[4775]: I1125 20:01:05.518957 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401681-rzgcc" event={"ID":"050bd532-dae0-46ac-93f0-096c75d4c0a6","Type":"ContainerDied","Data":"06154bfdbd474fadfae9019f186cbd1d4cf46b36481811742201a8b98e0bc82b"} Nov 25 20:01:05 crc kubenswrapper[4775]: I1125 20:01:05.519248 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06154bfdbd474fadfae9019f186cbd1d4cf46b36481811742201a8b98e0bc82b" Nov 25 20:01:05 crc kubenswrapper[4775]: I1125 20:01:05.519060 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401681-rzgcc" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.432903 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kwk5r"] Nov 25 20:01:07 crc kubenswrapper[4775]: E1125 20:01:07.433720 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050bd532-dae0-46ac-93f0-096c75d4c0a6" containerName="keystone-cron" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.433737 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="050bd532-dae0-46ac-93f0-096c75d4c0a6" containerName="keystone-cron" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.433965 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="050bd532-dae0-46ac-93f0-096c75d4c0a6" containerName="keystone-cron" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.435218 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.447790 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwk5r"] Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.552003 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-catalog-content\") pod \"certified-operators-kwk5r\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.552077 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-utilities\") pod \"certified-operators-kwk5r\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " 
pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.552151 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgz6g\" (UniqueName: \"kubernetes.io/projected/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-kube-api-access-cgz6g\") pod \"certified-operators-kwk5r\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.653637 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-catalog-content\") pod \"certified-operators-kwk5r\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.653730 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-utilities\") pod \"certified-operators-kwk5r\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.653781 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgz6g\" (UniqueName: \"kubernetes.io/projected/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-kube-api-access-cgz6g\") pod \"certified-operators-kwk5r\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.654213 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-catalog-content\") pod \"certified-operators-kwk5r\" (UID: 
\"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.654293 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-utilities\") pod \"certified-operators-kwk5r\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.675777 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgz6g\" (UniqueName: \"kubernetes.io/projected/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-kube-api-access-cgz6g\") pod \"certified-operators-kwk5r\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:07 crc kubenswrapper[4775]: I1125 20:01:07.768392 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:08 crc kubenswrapper[4775]: I1125 20:01:08.277789 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwk5r"] Nov 25 20:01:08 crc kubenswrapper[4775]: I1125 20:01:08.555736 4775 generic.go:334] "Generic (PLEG): container finished" podID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerID="4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855" exitCode=0 Nov 25 20:01:08 crc kubenswrapper[4775]: I1125 20:01:08.555796 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwk5r" event={"ID":"2d6d4b84-6ac5-474b-9578-623d6f96a1f1","Type":"ContainerDied","Data":"4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855"} Nov 25 20:01:08 crc kubenswrapper[4775]: I1125 20:01:08.555988 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwk5r" 
event={"ID":"2d6d4b84-6ac5-474b-9578-623d6f96a1f1","Type":"ContainerStarted","Data":"13a33fd18eb0d87dcde59db46d586a435d2a5515c3a12479016d0df5ae33c301"} Nov 25 20:01:08 crc kubenswrapper[4775]: I1125 20:01:08.859199 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:01:08 crc kubenswrapper[4775]: E1125 20:01:08.859630 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:01:09 crc kubenswrapper[4775]: I1125 20:01:09.570102 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwk5r" event={"ID":"2d6d4b84-6ac5-474b-9578-623d6f96a1f1","Type":"ContainerStarted","Data":"bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a"} Nov 25 20:01:10 crc kubenswrapper[4775]: I1125 20:01:10.587484 4775 generic.go:334] "Generic (PLEG): container finished" podID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerID="bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a" exitCode=0 Nov 25 20:01:10 crc kubenswrapper[4775]: I1125 20:01:10.587641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwk5r" event={"ID":"2d6d4b84-6ac5-474b-9578-623d6f96a1f1","Type":"ContainerDied","Data":"bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a"} Nov 25 20:01:10 crc kubenswrapper[4775]: I1125 20:01:10.837164 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2sjxw"] Nov 25 20:01:10 crc kubenswrapper[4775]: I1125 20:01:10.840761 4775 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:10 crc kubenswrapper[4775]: I1125 20:01:10.870552 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sjxw"] Nov 25 20:01:10 crc kubenswrapper[4775]: I1125 20:01:10.932403 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bmc8\" (UniqueName: \"kubernetes.io/projected/607e0dd4-3bff-4a87-b657-af0422d82d19-kube-api-access-4bmc8\") pod \"redhat-marketplace-2sjxw\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:10 crc kubenswrapper[4775]: I1125 20:01:10.932600 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-catalog-content\") pod \"redhat-marketplace-2sjxw\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:10 crc kubenswrapper[4775]: I1125 20:01:10.932822 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-utilities\") pod \"redhat-marketplace-2sjxw\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.034689 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bmc8\" (UniqueName: \"kubernetes.io/projected/607e0dd4-3bff-4a87-b657-af0422d82d19-kube-api-access-4bmc8\") pod \"redhat-marketplace-2sjxw\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.034756 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-catalog-content\") pod \"redhat-marketplace-2sjxw\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.034825 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-utilities\") pod \"redhat-marketplace-2sjxw\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.035230 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-utilities\") pod \"redhat-marketplace-2sjxw\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.035694 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-catalog-content\") pod \"redhat-marketplace-2sjxw\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.057129 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bmc8\" (UniqueName: \"kubernetes.io/projected/607e0dd4-3bff-4a87-b657-af0422d82d19-kube-api-access-4bmc8\") pod \"redhat-marketplace-2sjxw\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.178089 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.602880 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwk5r" event={"ID":"2d6d4b84-6ac5-474b-9578-623d6f96a1f1","Type":"ContainerStarted","Data":"2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc"} Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.626006 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kwk5r" podStartSLOduration=2.186725356 podStartE2EDuration="4.625988908s" podCreationTimestamp="2025-11-25 20:01:07 +0000 UTC" firstStartedPulling="2025-11-25 20:01:08.557600496 +0000 UTC m=+1650.473962862" lastFinishedPulling="2025-11-25 20:01:10.996864028 +0000 UTC m=+1652.913226414" observedRunningTime="2025-11-25 20:01:11.617927699 +0000 UTC m=+1653.534290065" watchObservedRunningTime="2025-11-25 20:01:11.625988908 +0000 UTC m=+1653.542351274" Nov 25 20:01:11 crc kubenswrapper[4775]: I1125 20:01:11.665133 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sjxw"] Nov 25 20:01:12 crc kubenswrapper[4775]: I1125 20:01:12.619751 4775 generic.go:334] "Generic (PLEG): container finished" podID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerID="e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75" exitCode=0 Nov 25 20:01:12 crc kubenswrapper[4775]: I1125 20:01:12.619872 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sjxw" event={"ID":"607e0dd4-3bff-4a87-b657-af0422d82d19","Type":"ContainerDied","Data":"e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75"} Nov 25 20:01:12 crc kubenswrapper[4775]: I1125 20:01:12.620277 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sjxw" 
event={"ID":"607e0dd4-3bff-4a87-b657-af0422d82d19","Type":"ContainerStarted","Data":"62ccc032d306156c7fa420d257d17172ad50e4f9049653f375570f3f0356f87f"} Nov 25 20:01:13 crc kubenswrapper[4775]: I1125 20:01:13.635874 4775 generic.go:334] "Generic (PLEG): container finished" podID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerID="0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2" exitCode=0 Nov 25 20:01:13 crc kubenswrapper[4775]: I1125 20:01:13.635934 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sjxw" event={"ID":"607e0dd4-3bff-4a87-b657-af0422d82d19","Type":"ContainerDied","Data":"0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2"} Nov 25 20:01:14 crc kubenswrapper[4775]: I1125 20:01:14.648108 4775 generic.go:334] "Generic (PLEG): container finished" podID="695faa37-300a-4e2a-b516-a5053d5663dc" containerID="a5492b7d3899805941f7086d746b8e0490a39870dd419db30a7dd3eb0373c9d3" exitCode=0 Nov 25 20:01:14 crc kubenswrapper[4775]: I1125 20:01:14.648265 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" event={"ID":"695faa37-300a-4e2a-b516-a5053d5663dc","Type":"ContainerDied","Data":"a5492b7d3899805941f7086d746b8e0490a39870dd419db30a7dd3eb0373c9d3"} Nov 25 20:01:14 crc kubenswrapper[4775]: I1125 20:01:14.652894 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sjxw" event={"ID":"607e0dd4-3bff-4a87-b657-af0422d82d19","Type":"ContainerStarted","Data":"7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc"} Nov 25 20:01:14 crc kubenswrapper[4775]: I1125 20:01:14.685220 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2sjxw" podStartSLOduration=3.166819462 podStartE2EDuration="4.685196739s" podCreationTimestamp="2025-11-25 20:01:10 +0000 UTC" firstStartedPulling="2025-11-25 
20:01:12.623865741 +0000 UTC m=+1654.540228127" lastFinishedPulling="2025-11-25 20:01:14.142243018 +0000 UTC m=+1656.058605404" observedRunningTime="2025-11-25 20:01:14.684072679 +0000 UTC m=+1656.600435055" watchObservedRunningTime="2025-11-25 20:01:14.685196739 +0000 UTC m=+1656.601559115" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.147410 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.232039 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-ssh-key\") pod \"695faa37-300a-4e2a-b516-a5053d5663dc\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.232290 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-inventory\") pod \"695faa37-300a-4e2a-b516-a5053d5663dc\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.232518 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f4f2\" (UniqueName: \"kubernetes.io/projected/695faa37-300a-4e2a-b516-a5053d5663dc-kube-api-access-4f4f2\") pod \"695faa37-300a-4e2a-b516-a5053d5663dc\" (UID: \"695faa37-300a-4e2a-b516-a5053d5663dc\") " Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.241776 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695faa37-300a-4e2a-b516-a5053d5663dc-kube-api-access-4f4f2" (OuterVolumeSpecName: "kube-api-access-4f4f2") pod "695faa37-300a-4e2a-b516-a5053d5663dc" (UID: "695faa37-300a-4e2a-b516-a5053d5663dc"). InnerVolumeSpecName "kube-api-access-4f4f2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.273495 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-inventory" (OuterVolumeSpecName: "inventory") pod "695faa37-300a-4e2a-b516-a5053d5663dc" (UID: "695faa37-300a-4e2a-b516-a5053d5663dc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.273593 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "695faa37-300a-4e2a-b516-a5053d5663dc" (UID: "695faa37-300a-4e2a-b516-a5053d5663dc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.335145 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.335174 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/695faa37-300a-4e2a-b516-a5053d5663dc-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.335184 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f4f2\" (UniqueName: \"kubernetes.io/projected/695faa37-300a-4e2a-b516-a5053d5663dc-kube-api-access-4f4f2\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.677323 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" 
event={"ID":"695faa37-300a-4e2a-b516-a5053d5663dc","Type":"ContainerDied","Data":"bf58911c017c1357618ff5bbc13fd7a6f71234d93d8c3e0da4cc3017da57c18e"} Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.677400 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf58911c017c1357618ff5bbc13fd7a6f71234d93d8c3e0da4cc3017da57c18e" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.677427 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.805060 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7"] Nov 25 20:01:16 crc kubenswrapper[4775]: E1125 20:01:16.810005 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695faa37-300a-4e2a-b516-a5053d5663dc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.810119 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="695faa37-300a-4e2a-b516-a5053d5663dc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.812080 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="695faa37-300a-4e2a-b516-a5053d5663dc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.814509 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.820318 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.820540 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.820659 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.821908 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.843674 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7"] Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.844632 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.844704 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l5nq\" (UniqueName: \"kubernetes.io/projected/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-kube-api-access-5l5nq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 
20:01:16.844866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.945840 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.945977 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.946005 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l5nq\" (UniqueName: \"kubernetes.io/projected/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-kube-api-access-5l5nq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.952926 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.954781 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:16 crc kubenswrapper[4775]: I1125 20:01:16.976945 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l5nq\" (UniqueName: \"kubernetes.io/projected/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-kube-api-access-5l5nq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:17 crc kubenswrapper[4775]: I1125 20:01:17.143069 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:17 crc kubenswrapper[4775]: I1125 20:01:17.548390 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7"] Nov 25 20:01:17 crc kubenswrapper[4775]: W1125 20:01:17.557206 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5e07a3a_fdc4_4fd1_9039_181cc3ae0464.slice/crio-3e538a63a283efab5236d87fb707d04cc4be47924a466ad6c1acb7e6b16417f6 WatchSource:0}: Error finding container 3e538a63a283efab5236d87fb707d04cc4be47924a466ad6c1acb7e6b16417f6: Status 404 returned error can't find the container with id 3e538a63a283efab5236d87fb707d04cc4be47924a466ad6c1acb7e6b16417f6 Nov 25 20:01:17 crc kubenswrapper[4775]: I1125 20:01:17.562942 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:01:17 crc kubenswrapper[4775]: I1125 20:01:17.689422 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" event={"ID":"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464","Type":"ContainerStarted","Data":"3e538a63a283efab5236d87fb707d04cc4be47924a466ad6c1acb7e6b16417f6"} Nov 25 20:01:17 crc kubenswrapper[4775]: I1125 20:01:17.769065 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:17 crc kubenswrapper[4775]: I1125 20:01:17.769418 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:17 crc kubenswrapper[4775]: I1125 20:01:17.848003 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:18 crc kubenswrapper[4775]: I1125 20:01:18.699203 4775 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" event={"ID":"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464","Type":"ContainerStarted","Data":"74bd276437fa9e3870f16c4b0c9f533e89d3c06ffcb34fbc21df39187915862a"} Nov 25 20:01:18 crc kubenswrapper[4775]: I1125 20:01:18.719424 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" podStartSLOduration=2.2644680409999998 podStartE2EDuration="2.719409005s" podCreationTimestamp="2025-11-25 20:01:16 +0000 UTC" firstStartedPulling="2025-11-25 20:01:17.562330522 +0000 UTC m=+1659.478692928" lastFinishedPulling="2025-11-25 20:01:18.017271506 +0000 UTC m=+1659.933633892" observedRunningTime="2025-11-25 20:01:18.716677481 +0000 UTC m=+1660.633039847" watchObservedRunningTime="2025-11-25 20:01:18.719409005 +0000 UTC m=+1660.635771371" Nov 25 20:01:18 crc kubenswrapper[4775]: I1125 20:01:18.798638 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:18 crc kubenswrapper[4775]: I1125 20:01:18.855561 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwk5r"] Nov 25 20:01:20 crc kubenswrapper[4775]: I1125 20:01:20.721071 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kwk5r" podUID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerName="registry-server" containerID="cri-o://2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc" gracePeriod=2 Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.179330 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.179864 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.242887 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.301075 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.350558 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgz6g\" (UniqueName: \"kubernetes.io/projected/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-kube-api-access-cgz6g\") pod \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.350726 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-catalog-content\") pod \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.350912 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-utilities\") pod \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\" (UID: \"2d6d4b84-6ac5-474b-9578-623d6f96a1f1\") " Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.351815 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-utilities" (OuterVolumeSpecName: "utilities") pod "2d6d4b84-6ac5-474b-9578-623d6f96a1f1" (UID: "2d6d4b84-6ac5-474b-9578-623d6f96a1f1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.369710 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-kube-api-access-cgz6g" (OuterVolumeSpecName: "kube-api-access-cgz6g") pod "2d6d4b84-6ac5-474b-9578-623d6f96a1f1" (UID: "2d6d4b84-6ac5-474b-9578-623d6f96a1f1"). InnerVolumeSpecName "kube-api-access-cgz6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.392095 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d6d4b84-6ac5-474b-9578-623d6f96a1f1" (UID: "2d6d4b84-6ac5-474b-9578-623d6f96a1f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.452182 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgz6g\" (UniqueName: \"kubernetes.io/projected/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-kube-api-access-cgz6g\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.452220 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.452233 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6d4b84-6ac5-474b-9578-623d6f96a1f1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.738135 4775 generic.go:334] "Generic (PLEG): container finished" podID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" 
containerID="2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc" exitCode=0 Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.738228 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwk5r" event={"ID":"2d6d4b84-6ac5-474b-9578-623d6f96a1f1","Type":"ContainerDied","Data":"2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc"} Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.738641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwk5r" event={"ID":"2d6d4b84-6ac5-474b-9578-623d6f96a1f1","Type":"ContainerDied","Data":"13a33fd18eb0d87dcde59db46d586a435d2a5515c3a12479016d0df5ae33c301"} Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.738733 4775 scope.go:117] "RemoveContainer" containerID="2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.738256 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwk5r" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.780865 4775 scope.go:117] "RemoveContainer" containerID="bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.808521 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwk5r"] Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.823821 4775 scope.go:117] "RemoveContainer" containerID="4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.825608 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kwk5r"] Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.834195 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.847117 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:01:21 crc kubenswrapper[4775]: E1125 20:01:21.847412 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.902434 4775 scope.go:117] "RemoveContainer" containerID="2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc" Nov 25 20:01:21 crc kubenswrapper[4775]: E1125 20:01:21.902836 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc\": container with ID starting with 2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc not found: ID does not exist" containerID="2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.902875 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc"} err="failed to get container status \"2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc\": rpc error: code = NotFound desc = could not find container \"2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc\": container with ID starting with 2601e1e5b7daca339d936f0ff680a4b032d444cbd6925fddf19564936333b7dc not found: ID does not exist" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.902901 4775 scope.go:117] "RemoveContainer" containerID="bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a" Nov 25 20:01:21 crc kubenswrapper[4775]: E1125 20:01:21.903459 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a\": container with ID starting with bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a not found: ID does not exist" containerID="bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.903816 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a"} err="failed to get container status \"bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a\": rpc error: code = NotFound desc = could not find container \"bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a\": 
container with ID starting with bd4082a73c860734dce0fcb1cd62584d75e92704fae3f7ea82672798e0ebeb7a not found: ID does not exist" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.904003 4775 scope.go:117] "RemoveContainer" containerID="4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855" Nov 25 20:01:21 crc kubenswrapper[4775]: E1125 20:01:21.904605 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855\": container with ID starting with 4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855 not found: ID does not exist" containerID="4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855" Nov 25 20:01:21 crc kubenswrapper[4775]: I1125 20:01:21.904761 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855"} err="failed to get container status \"4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855\": rpc error: code = NotFound desc = could not find container \"4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855\": container with ID starting with 4af881b4994bffbc68265cbd8dca8233f63c1f4d88d5eb0e64498d78f172d855 not found: ID does not exist" Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.065057 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7828-account-create-update-r8568"] Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.081177 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8s2fd"] Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.091736 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8fab-account-create-update-l6j8t"] Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.101885 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-create-797rb"] Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.110619 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8s2fd"] Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.116982 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-797rb"] Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.123134 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8fab-account-create-update-l6j8t"] Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.129713 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7828-account-create-update-r8568"] Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.865791 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" path="/var/lib/kubelet/pods/2d6d4b84-6ac5-474b-9578-623d6f96a1f1/volumes" Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.867778 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316abeb5-c5ff-4da2-b056-0f704b710dc7" path="/var/lib/kubelet/pods/316abeb5-c5ff-4da2-b056-0f704b710dc7/volumes" Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.869080 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="327bd993-33a8-41c1-81e5-a3e8d92ff438" path="/var/lib/kubelet/pods/327bd993-33a8-41c1-81e5-a3e8d92ff438/volumes" Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.871176 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a5c5668-58e5-4380-a864-1be4be778b9e" path="/var/lib/kubelet/pods/4a5c5668-58e5-4380-a864-1be4be778b9e/volumes" Nov 25 20:01:22 crc kubenswrapper[4775]: I1125 20:01:22.872451 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c687c95-b35b-44f3-8db5-a32e1462e604" path="/var/lib/kubelet/pods/7c687c95-b35b-44f3-8db5-a32e1462e604/volumes" Nov 25 20:01:23 crc 
kubenswrapper[4775]: I1125 20:01:23.046438 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ft6j6"] Nov 25 20:01:23 crc kubenswrapper[4775]: I1125 20:01:23.060630 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-22a0-account-create-update-fxms6"] Nov 25 20:01:23 crc kubenswrapper[4775]: I1125 20:01:23.074943 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ft6j6"] Nov 25 20:01:23 crc kubenswrapper[4775]: I1125 20:01:23.084472 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-22a0-account-create-update-fxms6"] Nov 25 20:01:23 crc kubenswrapper[4775]: I1125 20:01:23.770980 4775 generic.go:334] "Generic (PLEG): container finished" podID="b5e07a3a-fdc4-4fd1-9039-181cc3ae0464" containerID="74bd276437fa9e3870f16c4b0c9f533e89d3c06ffcb34fbc21df39187915862a" exitCode=0 Nov 25 20:01:23 crc kubenswrapper[4775]: I1125 20:01:23.771040 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" event={"ID":"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464","Type":"ContainerDied","Data":"74bd276437fa9e3870f16c4b0c9f533e89d3c06ffcb34fbc21df39187915862a"} Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.100997 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sjxw"] Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.101362 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2sjxw" podUID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerName="registry-server" containerID="cri-o://7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc" gracePeriod=2 Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.704432 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.786921 4775 generic.go:334] "Generic (PLEG): container finished" podID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerID="7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc" exitCode=0 Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.786974 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sjxw" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.787067 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sjxw" event={"ID":"607e0dd4-3bff-4a87-b657-af0422d82d19","Type":"ContainerDied","Data":"7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc"} Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.787140 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sjxw" event={"ID":"607e0dd4-3bff-4a87-b657-af0422d82d19","Type":"ContainerDied","Data":"62ccc032d306156c7fa420d257d17172ad50e4f9049653f375570f3f0356f87f"} Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.787179 4775 scope.go:117] "RemoveContainer" containerID="7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.826163 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-utilities\") pod \"607e0dd4-3bff-4a87-b657-af0422d82d19\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.826258 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bmc8\" (UniqueName: \"kubernetes.io/projected/607e0dd4-3bff-4a87-b657-af0422d82d19-kube-api-access-4bmc8\") pod 
\"607e0dd4-3bff-4a87-b657-af0422d82d19\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.826325 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-catalog-content\") pod \"607e0dd4-3bff-4a87-b657-af0422d82d19\" (UID: \"607e0dd4-3bff-4a87-b657-af0422d82d19\") " Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.827568 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-utilities" (OuterVolumeSpecName: "utilities") pod "607e0dd4-3bff-4a87-b657-af0422d82d19" (UID: "607e0dd4-3bff-4a87-b657-af0422d82d19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.835077 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607e0dd4-3bff-4a87-b657-af0422d82d19-kube-api-access-4bmc8" (OuterVolumeSpecName: "kube-api-access-4bmc8") pod "607e0dd4-3bff-4a87-b657-af0422d82d19" (UID: "607e0dd4-3bff-4a87-b657-af0422d82d19"). InnerVolumeSpecName "kube-api-access-4bmc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.845173 4775 scope.go:117] "RemoveContainer" containerID="0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.850607 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "607e0dd4-3bff-4a87-b657-af0422d82d19" (UID: "607e0dd4-3bff-4a87-b657-af0422d82d19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.862540 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b42817-fba4-49ac-9215-3efbb04f0ef9" path="/var/lib/kubelet/pods/55b42817-fba4-49ac-9215-3efbb04f0ef9/volumes" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.863431 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9e08f8-216a-4f56-a8ac-e7f53147c636" path="/var/lib/kubelet/pods/be9e08f8-216a-4f56-a8ac-e7f53147c636/volumes" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.904317 4775 scope.go:117] "RemoveContainer" containerID="e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.930278 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.930318 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bmc8\" (UniqueName: \"kubernetes.io/projected/607e0dd4-3bff-4a87-b657-af0422d82d19-kube-api-access-4bmc8\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.930333 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e0dd4-3bff-4a87-b657-af0422d82d19-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.947129 4775 scope.go:117] "RemoveContainer" containerID="7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc" Nov 25 20:01:24 crc kubenswrapper[4775]: E1125 20:01:24.947694 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc\": container with 
ID starting with 7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc not found: ID does not exist" containerID="7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.947745 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc"} err="failed to get container status \"7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc\": rpc error: code = NotFound desc = could not find container \"7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc\": container with ID starting with 7e0ba3f5ba4f6b4b469d8e35477575f7d2b2077e385394882ff15ae4df70f1dc not found: ID does not exist" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.947778 4775 scope.go:117] "RemoveContainer" containerID="0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2" Nov 25 20:01:24 crc kubenswrapper[4775]: E1125 20:01:24.948195 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2\": container with ID starting with 0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2 not found: ID does not exist" containerID="0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.948233 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2"} err="failed to get container status \"0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2\": rpc error: code = NotFound desc = could not find container \"0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2\": container with ID starting with 0fde8517152fa7a146ae6dcef93412b9df9adde10c631e81ebb7bcd8ca2c89b2 not 
found: ID does not exist" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.948261 4775 scope.go:117] "RemoveContainer" containerID="e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75" Nov 25 20:01:24 crc kubenswrapper[4775]: E1125 20:01:24.948756 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75\": container with ID starting with e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75 not found: ID does not exist" containerID="e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75" Nov 25 20:01:24 crc kubenswrapper[4775]: I1125 20:01:24.948795 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75"} err="failed to get container status \"e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75\": rpc error: code = NotFound desc = could not find container \"e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75\": container with ID starting with e5fd3c096f737de31b670df524f2611b15611c8a3226598e0eb59c12d62b3f75 not found: ID does not exist" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.114604 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sjxw"] Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.126856 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sjxw"] Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.134924 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.336633 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-inventory\") pod \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.336764 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l5nq\" (UniqueName: \"kubernetes.io/projected/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-kube-api-access-5l5nq\") pod \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.336951 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-ssh-key\") pod \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\" (UID: \"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464\") " Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.343679 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-kube-api-access-5l5nq" (OuterVolumeSpecName: "kube-api-access-5l5nq") pod "b5e07a3a-fdc4-4fd1-9039-181cc3ae0464" (UID: "b5e07a3a-fdc4-4fd1-9039-181cc3ae0464"). InnerVolumeSpecName "kube-api-access-5l5nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.382596 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b5e07a3a-fdc4-4fd1-9039-181cc3ae0464" (UID: "b5e07a3a-fdc4-4fd1-9039-181cc3ae0464"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.391514 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-inventory" (OuterVolumeSpecName: "inventory") pod "b5e07a3a-fdc4-4fd1-9039-181cc3ae0464" (UID: "b5e07a3a-fdc4-4fd1-9039-181cc3ae0464"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.439961 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.440020 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l5nq\" (UniqueName: \"kubernetes.io/projected/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-kube-api-access-5l5nq\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.440042 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.799937 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" event={"ID":"b5e07a3a-fdc4-4fd1-9039-181cc3ae0464","Type":"ContainerDied","Data":"3e538a63a283efab5236d87fb707d04cc4be47924a466ad6c1acb7e6b16417f6"} Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.799991 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e538a63a283efab5236d87fb707d04cc4be47924a466ad6c1acb7e6b16417f6" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.800016 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.883564 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws"] Nov 25 20:01:25 crc kubenswrapper[4775]: E1125 20:01:25.884007 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerName="registry-server" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.884028 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerName="registry-server" Nov 25 20:01:25 crc kubenswrapper[4775]: E1125 20:01:25.884055 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerName="extract-content" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.884065 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerName="extract-content" Nov 25 20:01:25 crc kubenswrapper[4775]: E1125 20:01:25.884083 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerName="registry-server" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.884091 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerName="registry-server" Nov 25 20:01:25 crc kubenswrapper[4775]: E1125 20:01:25.884112 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerName="extract-content" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.884120 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerName="extract-content" Nov 25 20:01:25 crc kubenswrapper[4775]: E1125 20:01:25.884131 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerName="extract-utilities" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.884139 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerName="extract-utilities" Nov 25 20:01:25 crc kubenswrapper[4775]: E1125 20:01:25.884154 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerName="extract-utilities" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.884161 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerName="extract-utilities" Nov 25 20:01:25 crc kubenswrapper[4775]: E1125 20:01:25.884179 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e07a3a-fdc4-4fd1-9039-181cc3ae0464" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.884188 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e07a3a-fdc4-4fd1-9039-181cc3ae0464" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.885120 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6d4b84-6ac5-474b-9578-623d6f96a1f1" containerName="registry-server" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.885243 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e07a3a-fdc4-4fd1-9039-181cc3ae0464" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.885321 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="607e0dd4-3bff-4a87-b657-af0422d82d19" containerName="registry-server" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.887044 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.895237 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.895740 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.895992 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.896349 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.923456 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws"] Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.951570 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nfbws\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.951721 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk8c5\" (UniqueName: \"kubernetes.io/projected/112aead4-4563-494f-a3a6-eb7facee43f3-kube-api-access-tk8c5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nfbws\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:25 crc kubenswrapper[4775]: I1125 20:01:25.951815 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nfbws\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:26 crc kubenswrapper[4775]: I1125 20:01:26.053184 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nfbws\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:26 crc kubenswrapper[4775]: I1125 20:01:26.053371 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk8c5\" (UniqueName: \"kubernetes.io/projected/112aead4-4563-494f-a3a6-eb7facee43f3-kube-api-access-tk8c5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nfbws\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:26 crc kubenswrapper[4775]: I1125 20:01:26.053492 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nfbws\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:26 crc kubenswrapper[4775]: I1125 20:01:26.062531 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nfbws\" (UID: 
\"112aead4-4563-494f-a3a6-eb7facee43f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:26 crc kubenswrapper[4775]: I1125 20:01:26.075220 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nfbws\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:26 crc kubenswrapper[4775]: I1125 20:01:26.088298 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk8c5\" (UniqueName: \"kubernetes.io/projected/112aead4-4563-494f-a3a6-eb7facee43f3-kube-api-access-tk8c5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nfbws\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" Nov 25 20:01:26 crc kubenswrapper[4775]: I1125 20:01:26.225874 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws"
Nov 25 20:01:26 crc kubenswrapper[4775]: I1125 20:01:26.869189 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607e0dd4-3bff-4a87-b657-af0422d82d19" path="/var/lib/kubelet/pods/607e0dd4-3bff-4a87-b657-af0422d82d19/volumes"
Nov 25 20:01:26 crc kubenswrapper[4775]: I1125 20:01:26.871508 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws"]
Nov 25 20:01:27 crc kubenswrapper[4775]: I1125 20:01:27.825826 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" event={"ID":"112aead4-4563-494f-a3a6-eb7facee43f3","Type":"ContainerStarted","Data":"fd36e15664485fcc3f4745627cc0d8102b61e9506ae49a9f703b5a359281b7f9"}
Nov 25 20:01:27 crc kubenswrapper[4775]: I1125 20:01:27.826181 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" event={"ID":"112aead4-4563-494f-a3a6-eb7facee43f3","Type":"ContainerStarted","Data":"20fc1f9f799c688aac7b4778ccd2ff50c9d76622684192532822f037f039bfa8"}
Nov 25 20:01:27 crc kubenswrapper[4775]: I1125 20:01:27.865667 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" podStartSLOduration=2.395477435 podStartE2EDuration="2.865622271s" podCreationTimestamp="2025-11-25 20:01:25 +0000 UTC" firstStartedPulling="2025-11-25 20:01:26.859839202 +0000 UTC m=+1668.776201598" lastFinishedPulling="2025-11-25 20:01:27.329984038 +0000 UTC m=+1669.246346434" observedRunningTime="2025-11-25 20:01:27.852522496 +0000 UTC m=+1669.768884862" watchObservedRunningTime="2025-11-25 20:01:27.865622271 +0000 UTC m=+1669.781984667"
Nov 25 20:01:33 crc kubenswrapper[4775]: I1125 20:01:33.847723 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743"
Nov 25 20:01:33 crc kubenswrapper[4775]: E1125 20:01:33.848699 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:01:44 crc kubenswrapper[4775]: I1125 20:01:44.079545 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-n669j"]
Nov 25 20:01:44 crc kubenswrapper[4775]: I1125 20:01:44.098726 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-n669j"]
Nov 25 20:01:44 crc kubenswrapper[4775]: I1125 20:01:44.866298 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe5d55e-4118-479b-bd86-f3b8cdac21a8" path="/var/lib/kubelet/pods/cbe5d55e-4118-479b-bd86-f3b8cdac21a8/volumes"
Nov 25 20:01:46 crc kubenswrapper[4775]: I1125 20:01:46.849888 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743"
Nov 25 20:01:46 crc kubenswrapper[4775]: E1125 20:01:46.850733 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:01:48 crc kubenswrapper[4775]: I1125 20:01:48.649143 4775 scope.go:117] "RemoveContainer" containerID="24254b492ee99ddad75b5b63d9136ba70d585a419c6b70d69d7732ae2ce33613"
Nov 25 20:01:48 crc kubenswrapper[4775]: I1125 20:01:48.677764 4775 scope.go:117] "RemoveContainer" containerID="5a89ff352ff21b8ff2081247f805e55c92e0be9a124d510f8a53e6e910d4abb6"
Nov 25 20:01:48 crc kubenswrapper[4775]: I1125 20:01:48.770959 4775 scope.go:117] "RemoveContainer" containerID="3b6df1bd0d69df7256523aa461978d9de3034cafa16585c1acf23d82019e0efa"
Nov 25 20:01:48 crc kubenswrapper[4775]: I1125 20:01:48.808051 4775 scope.go:117] "RemoveContainer" containerID="f78da86ad06f55f67b274ca1d529925f0a9a607bb3d1fb8fafdcda6981d731be"
Nov 25 20:01:48 crc kubenswrapper[4775]: I1125 20:01:48.857053 4775 scope.go:117] "RemoveContainer" containerID="9814e148cbe862425145b65aa626c1950d34122682edfb4d9d366fc157703291"
Nov 25 20:01:48 crc kubenswrapper[4775]: I1125 20:01:48.902950 4775 scope.go:117] "RemoveContainer" containerID="b10a9993defbe80f82fc1c229471177497a1a1742f5cb7997732598ef7a8622a"
Nov 25 20:01:48 crc kubenswrapper[4775]: I1125 20:01:48.939955 4775 scope.go:117] "RemoveContainer" containerID="ec3b18e5aa5a42fcc4d9ef9256002ec922ed720d2e06f54c196ffe8d06cba268"
Nov 25 20:01:49 crc kubenswrapper[4775]: I1125 20:01:49.050018 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-q6hvd"]
Nov 25 20:01:49 crc kubenswrapper[4775]: I1125 20:01:49.063023 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-q6hvd"]
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.040570 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-841b-account-create-update-mrmc7"]
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.054159 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ff43-account-create-update-8cktf"]
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.063433 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-c8twt"]
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.071446 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ht9vf"]
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.079792 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-841b-account-create-update-mrmc7"]
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.087181 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-c8twt"]
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.093168 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ht9vf"]
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.099138 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ff43-account-create-update-8cktf"]
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.872561 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4" path="/var/lib/kubelet/pods/2ce05d1e-bcee-4125-b8bc-ad00b0b0eec4/volumes"
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.874483 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34676258-9f7c-4e01-9e25-eacccc2f9a7f" path="/var/lib/kubelet/pods/34676258-9f7c-4e01-9e25-eacccc2f9a7f/volumes"
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.875912 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8594b0a4-733b-4fb6-ad7c-1dc2c58a3908" path="/var/lib/kubelet/pods/8594b0a4-733b-4fb6-ad7c-1dc2c58a3908/volumes"
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.877130 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86c93add-5616-45ac-b00b-5269f071ce55" path="/var/lib/kubelet/pods/86c93add-5616-45ac-b00b-5269f071ce55/volumes"
Nov 25 20:01:50 crc kubenswrapper[4775]: I1125 20:01:50.879464 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d31220f5-79b6-47df-8501-3ce22c2fc213" path="/var/lib/kubelet/pods/d31220f5-79b6-47df-8501-3ce22c2fc213/volumes"
Nov 25 20:01:51 crc kubenswrapper[4775]: I1125 20:01:51.040164 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-849c-account-create-update-bxwdn"]
Nov 25 20:01:51 crc kubenswrapper[4775]: I1125 20:01:51.060361 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-849c-account-create-update-bxwdn"]
Nov 25 20:01:52 crc kubenswrapper[4775]: I1125 20:01:52.865096 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80ccc3b-e580-4303-a3d1-44c548db2e2e" path="/var/lib/kubelet/pods/e80ccc3b-e580-4303-a3d1-44c548db2e2e/volumes"
Nov 25 20:01:59 crc kubenswrapper[4775]: I1125 20:01:59.042368 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-m2srp"]
Nov 25 20:01:59 crc kubenswrapper[4775]: I1125 20:01:59.066190 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-m2srp"]
Nov 25 20:01:59 crc kubenswrapper[4775]: I1125 20:01:59.847134 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743"
Nov 25 20:01:59 crc kubenswrapper[4775]: E1125 20:01:59.847899 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:02:00 crc kubenswrapper[4775]: I1125 20:02:00.862340 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2" path="/var/lib/kubelet/pods/ff2d1344-ac0b-4e7a-b2c2-380c0ce989f2/volumes"
Nov 25 20:02:13 crc kubenswrapper[4775]: I1125 20:02:13.392238 4775 generic.go:334] "Generic (PLEG): container finished" podID="112aead4-4563-494f-a3a6-eb7facee43f3" containerID="fd36e15664485fcc3f4745627cc0d8102b61e9506ae49a9f703b5a359281b7f9" exitCode=0
Nov 25 20:02:13 crc kubenswrapper[4775]: I1125 20:02:13.392309 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" event={"ID":"112aead4-4563-494f-a3a6-eb7facee43f3","Type":"ContainerDied","Data":"fd36e15664485fcc3f4745627cc0d8102b61e9506ae49a9f703b5a359281b7f9"}
Nov 25 20:02:13 crc kubenswrapper[4775]: I1125 20:02:13.847812 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743"
Nov 25 20:02:13 crc kubenswrapper[4775]: E1125 20:02:13.848205 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:02:14 crc kubenswrapper[4775]: I1125 20:02:14.916575 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.099067 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-ssh-key\") pod \"112aead4-4563-494f-a3a6-eb7facee43f3\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") "
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.099447 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk8c5\" (UniqueName: \"kubernetes.io/projected/112aead4-4563-494f-a3a6-eb7facee43f3-kube-api-access-tk8c5\") pod \"112aead4-4563-494f-a3a6-eb7facee43f3\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") "
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.099693 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-inventory\") pod \"112aead4-4563-494f-a3a6-eb7facee43f3\" (UID: \"112aead4-4563-494f-a3a6-eb7facee43f3\") "
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.116241 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112aead4-4563-494f-a3a6-eb7facee43f3-kube-api-access-tk8c5" (OuterVolumeSpecName: "kube-api-access-tk8c5") pod "112aead4-4563-494f-a3a6-eb7facee43f3" (UID: "112aead4-4563-494f-a3a6-eb7facee43f3"). InnerVolumeSpecName "kube-api-access-tk8c5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.143533 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-inventory" (OuterVolumeSpecName: "inventory") pod "112aead4-4563-494f-a3a6-eb7facee43f3" (UID: "112aead4-4563-494f-a3a6-eb7facee43f3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.151512 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "112aead4-4563-494f-a3a6-eb7facee43f3" (UID: "112aead4-4563-494f-a3a6-eb7facee43f3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.202020 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.202061 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk8c5\" (UniqueName: \"kubernetes.io/projected/112aead4-4563-494f-a3a6-eb7facee43f3-kube-api-access-tk8c5\") on node \"crc\" DevicePath \"\""
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.202078 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112aead4-4563-494f-a3a6-eb7facee43f3-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.432747 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws" event={"ID":"112aead4-4563-494f-a3a6-eb7facee43f3","Type":"ContainerDied","Data":"20fc1f9f799c688aac7b4778ccd2ff50c9d76622684192532822f037f039bfa8"}
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.432807 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fc1f9f799c688aac7b4778ccd2ff50c9d76622684192532822f037f039bfa8"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.432882 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.573005 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"]
Nov 25 20:02:15 crc kubenswrapper[4775]: E1125 20:02:15.573683 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112aead4-4563-494f-a3a6-eb7facee43f3" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.573713 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="112aead4-4563-494f-a3a6-eb7facee43f3" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.574044 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="112aead4-4563-494f-a3a6-eb7facee43f3" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.575152 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.577950 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.579103 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.579257 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.579283 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.591770 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"]
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.718337 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9v9z\" (UniqueName: \"kubernetes.io/projected/f1164693-72ba-4636-9d6a-b89e5a6d2145-kube-api-access-g9v9z\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.718480 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.718611 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.821183 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9v9z\" (UniqueName: \"kubernetes.io/projected/f1164693-72ba-4636-9d6a-b89e5a6d2145-kube-api-access-g9v9z\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.821299 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.821378 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.825776 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.830625 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.843600 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9v9z\" (UniqueName: \"kubernetes.io/projected/f1164693-72ba-4636-9d6a-b89e5a6d2145-kube-api-access-g9v9z\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:15 crc kubenswrapper[4775]: I1125 20:02:15.896988 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:16 crc kubenswrapper[4775]: I1125 20:02:16.253644 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"]
Nov 25 20:02:16 crc kubenswrapper[4775]: W1125 20:02:16.254986 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1164693_72ba_4636_9d6a_b89e5a6d2145.slice/crio-fb5d94539cd0a2c6cd2051fc1ed0f22bbaec682b37888d51127f1b2c5ec4dc85 WatchSource:0}: Error finding container fb5d94539cd0a2c6cd2051fc1ed0f22bbaec682b37888d51127f1b2c5ec4dc85: Status 404 returned error can't find the container with id fb5d94539cd0a2c6cd2051fc1ed0f22bbaec682b37888d51127f1b2c5ec4dc85
Nov 25 20:02:16 crc kubenswrapper[4775]: I1125 20:02:16.446049 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l" event={"ID":"f1164693-72ba-4636-9d6a-b89e5a6d2145","Type":"ContainerStarted","Data":"fb5d94539cd0a2c6cd2051fc1ed0f22bbaec682b37888d51127f1b2c5ec4dc85"}
Nov 25 20:02:17 crc kubenswrapper[4775]: I1125 20:02:17.473284 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l" event={"ID":"f1164693-72ba-4636-9d6a-b89e5a6d2145","Type":"ContainerStarted","Data":"7733cb57da748ed6d49710745ceeebabf444a0246f1153b8f50bd9e39c5ac500"}
Nov 25 20:02:17 crc kubenswrapper[4775]: I1125 20:02:17.504677 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l" podStartSLOduration=2.011937805 podStartE2EDuration="2.504625182s" podCreationTimestamp="2025-11-25 20:02:15 +0000 UTC" firstStartedPulling="2025-11-25 20:02:16.257347241 +0000 UTC m=+1718.173709637" lastFinishedPulling="2025-11-25 20:02:16.750034618 +0000 UTC m=+1718.666397014" observedRunningTime="2025-11-25 20:02:17.499353079 +0000 UTC m=+1719.415715445" watchObservedRunningTime="2025-11-25 20:02:17.504625182 +0000 UTC m=+1719.420987578"
Nov 25 20:02:21 crc kubenswrapper[4775]: I1125 20:02:21.567997 4775 generic.go:334] "Generic (PLEG): container finished" podID="f1164693-72ba-4636-9d6a-b89e5a6d2145" containerID="7733cb57da748ed6d49710745ceeebabf444a0246f1153b8f50bd9e39c5ac500" exitCode=0
Nov 25 20:02:21 crc kubenswrapper[4775]: I1125 20:02:21.568124 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l" event={"ID":"f1164693-72ba-4636-9d6a-b89e5a6d2145","Type":"ContainerDied","Data":"7733cb57da748ed6d49710745ceeebabf444a0246f1153b8f50bd9e39c5ac500"}
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.075696 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.109710 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-ssh-key\") pod \"f1164693-72ba-4636-9d6a-b89e5a6d2145\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") "
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.109949 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9v9z\" (UniqueName: \"kubernetes.io/projected/f1164693-72ba-4636-9d6a-b89e5a6d2145-kube-api-access-g9v9z\") pod \"f1164693-72ba-4636-9d6a-b89e5a6d2145\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") "
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.110046 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-inventory\") pod \"f1164693-72ba-4636-9d6a-b89e5a6d2145\" (UID: \"f1164693-72ba-4636-9d6a-b89e5a6d2145\") "
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.122939 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1164693-72ba-4636-9d6a-b89e5a6d2145-kube-api-access-g9v9z" (OuterVolumeSpecName: "kube-api-access-g9v9z") pod "f1164693-72ba-4636-9d6a-b89e5a6d2145" (UID: "f1164693-72ba-4636-9d6a-b89e5a6d2145"). InnerVolumeSpecName "kube-api-access-g9v9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.134032 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-inventory" (OuterVolumeSpecName: "inventory") pod "f1164693-72ba-4636-9d6a-b89e5a6d2145" (UID: "f1164693-72ba-4636-9d6a-b89e5a6d2145"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.162705 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f1164693-72ba-4636-9d6a-b89e5a6d2145" (UID: "f1164693-72ba-4636-9d6a-b89e5a6d2145"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.212424 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.212476 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1164693-72ba-4636-9d6a-b89e5a6d2145-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.212494 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9v9z\" (UniqueName: \"kubernetes.io/projected/f1164693-72ba-4636-9d6a-b89e5a6d2145-kube-api-access-g9v9z\") on node \"crc\" DevicePath \"\""
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.594974 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l" event={"ID":"f1164693-72ba-4636-9d6a-b89e5a6d2145","Type":"ContainerDied","Data":"fb5d94539cd0a2c6cd2051fc1ed0f22bbaec682b37888d51127f1b2c5ec4dc85"}
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.595030 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb5d94539cd0a2c6cd2051fc1ed0f22bbaec682b37888d51127f1b2c5ec4dc85"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.595080 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.692556 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"]
Nov 25 20:02:23 crc kubenswrapper[4775]: E1125 20:02:23.693215 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1164693-72ba-4636-9d6a-b89e5a6d2145" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.693237 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1164693-72ba-4636-9d6a-b89e5a6d2145" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.693501 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1164693-72ba-4636-9d6a-b89e5a6d2145" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.694352 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.697076 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.697308 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.697476 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.697680 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.704730 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"]
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.821376 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snbzs\" (UniqueName: \"kubernetes.io/projected/3405920b-f460-4a4a-ab4b-9a2e021954e8-kube-api-access-snbzs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.821697 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.821761 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.923281 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.923578 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snbzs\" (UniqueName: \"kubernetes.io/projected/3405920b-f460-4a4a-ab4b-9a2e021954e8-kube-api-access-snbzs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.923633 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.930390 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.932110 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:23 crc kubenswrapper[4775]: I1125 20:02:23.947293 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snbzs\" (UniqueName: \"kubernetes.io/projected/3405920b-f460-4a4a-ab4b-9a2e021954e8-kube-api-access-snbzs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:24 crc kubenswrapper[4775]: I1125 20:02:24.016401 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"
Nov 25 20:02:24 crc kubenswrapper[4775]: I1125 20:02:24.627511 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"]
Nov 25 20:02:24 crc kubenswrapper[4775]: I1125 20:02:24.847980 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743"
Nov 25 20:02:24 crc kubenswrapper[4775]: E1125 20:02:24.848502 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:02:25 crc kubenswrapper[4775]: I1125 20:02:25.621530 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv" event={"ID":"3405920b-f460-4a4a-ab4b-9a2e021954e8","Type":"ContainerStarted","Data":"79b65ec4d0cdd98f40628c9c991d5c6e95dda74131836baf45a6aa392b2615f8"}
Nov 25 20:02:25 crc kubenswrapper[4775]: I1125 20:02:25.621591 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv" event={"ID":"3405920b-f460-4a4a-ab4b-9a2e021954e8","Type":"ContainerStarted","Data":"55fa4f035dd9efcc8c91b6acc83f7797d572fe81ebfbf79ebc30c297a8208dab"}
Nov 25 20:02:25 crc kubenswrapper[4775]: I1125 20:02:25.652440 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv" podStartSLOduration=2.08790112 podStartE2EDuration="2.652417327s" podCreationTimestamp="2025-11-25 20:02:23 +0000 UTC" firstStartedPulling="2025-11-25 20:02:24.641249302 +0000 UTC m=+1726.557611678" lastFinishedPulling="2025-11-25 20:02:25.205765509 +0000 UTC m=+1727.122127885" observedRunningTime="2025-11-25 20:02:25.649875539 +0000 UTC m=+1727.566237915" watchObservedRunningTime="2025-11-25 20:02:25.652417327 +0000 UTC m=+1727.568779703"
Nov 25 20:02:29 crc kubenswrapper[4775]: I1125 20:02:29.047726 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7rf4s"]
Nov 25 20:02:29 crc kubenswrapper[4775]: I1125 20:02:29.077795 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bln65"]
Nov 25 20:02:29 crc kubenswrapper[4775]: I1125 20:02:29.084979 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7rf4s"]
Nov 25 20:02:29 crc kubenswrapper[4775]: I1125 20:02:29.092994 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bln65"]
Nov 25 20:02:30 crc kubenswrapper[4775]: I1125 20:02:30.030101 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6vb6j"]
Nov 25 20:02:30 crc kubenswrapper[4775]: I1125 20:02:30.041426 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6vb6j"]
Nov 25 20:02:30 crc kubenswrapper[4775]: I1125 20:02:30.862272 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61bc599e-4476-4c80-9749-97b489f52a22" path="/var/lib/kubelet/pods/61bc599e-4476-4c80-9749-97b489f52a22/volumes"
Nov 25 20:02:30 crc kubenswrapper[4775]: I1125 20:02:30.863331 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5fc0506-2ee8-4325-92cd-06f4a5cb61e2" path="/var/lib/kubelet/pods/b5fc0506-2ee8-4325-92cd-06f4a5cb61e2/volumes"
Nov 25 20:02:30 crc kubenswrapper[4775]: I1125 20:02:30.864083 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7477a30-1ee6-4d7e-83d2-7650c311ef6a" path="/var/lib/kubelet/pods/e7477a30-1ee6-4d7e-83d2-7650c311ef6a/volumes"
Nov
25 20:02:37 crc kubenswrapper[4775]: I1125 20:02:37.039742 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xnrzx"] Nov 25 20:02:37 crc kubenswrapper[4775]: I1125 20:02:37.049065 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xnrzx"] Nov 25 20:02:38 crc kubenswrapper[4775]: I1125 20:02:38.871347 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f" path="/var/lib/kubelet/pods/7da24649-fbcd-4b7d-9c7e-ea4a3a006e6f/volumes" Nov 25 20:02:39 crc kubenswrapper[4775]: I1125 20:02:39.847088 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:02:39 crc kubenswrapper[4775]: E1125 20:02:39.848082 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:02:46 crc kubenswrapper[4775]: I1125 20:02:46.054026 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-p6t4d"] Nov 25 20:02:46 crc kubenswrapper[4775]: I1125 20:02:46.068771 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-p6t4d"] Nov 25 20:02:46 crc kubenswrapper[4775]: I1125 20:02:46.866385 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8913d5-5107-422d-8554-1d8c951253fd" path="/var/lib/kubelet/pods/bc8913d5-5107-422d-8554-1d8c951253fd/volumes" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.180841 4775 scope.go:117] "RemoveContainer" containerID="fc714c37e611a72b165de6456e699bd6bbbde7d2a85d5e92737ac954f01e7d8b" Nov 25 20:02:49 crc 
kubenswrapper[4775]: I1125 20:02:49.246593 4775 scope.go:117] "RemoveContainer" containerID="ea6769a520ec6a84bd4ba8c5438d944ca0ee470bd235c1f5944f35378987f528" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.290247 4775 scope.go:117] "RemoveContainer" containerID="a49f3d3429050ce25af960286f05cce53c8850d86172437d97f22ae09683f77a" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.334901 4775 scope.go:117] "RemoveContainer" containerID="8377699798b0043f92dd3b8b71381aa0e025a66ecc06a5a996659e764c7bde5e" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.392904 4775 scope.go:117] "RemoveContainer" containerID="7fbe7ebac47f3b873475bc1a37b3df112bc9d1decebe9e2493c994ae345fdf52" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.437139 4775 scope.go:117] "RemoveContainer" containerID="71df4547b2f6de0e06a65d6289cf1d715015bbfb6a60aa2843987bd6833b7fcd" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.460166 4775 scope.go:117] "RemoveContainer" containerID="97d3fdc25f88e86a23ffa46a33a486e2278406293c1a7d902fc6c7c50d8725dc" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.496966 4775 scope.go:117] "RemoveContainer" containerID="53b57cdcbfadf01205a51fefc908bb3dfe709d7a0f68388e1fde383dd23a8d21" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.529523 4775 scope.go:117] "RemoveContainer" containerID="1110b5a72bf12a9b230e59dcd5b13a3af717d43db6dea5a764d2ef66c4ae7762" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.592536 4775 scope.go:117] "RemoveContainer" containerID="78c08c37208edf0dae69e53f83802f661b18c1a65bce906d8edcbac01227b2e1" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.633517 4775 scope.go:117] "RemoveContainer" containerID="4db6f50bf740bae07b01352ba6f443bc798bcb497062213ed6afc183fb31b4c6" Nov 25 20:02:49 crc kubenswrapper[4775]: I1125 20:02:49.661248 4775 scope.go:117] "RemoveContainer" containerID="94f5bf9c2755ae1bbc2e249ee8e6e89eedc18597c932e6b27f4d48382f10c491" Nov 25 20:02:52 crc kubenswrapper[4775]: I1125 
20:02:52.847314 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:02:52 crc kubenswrapper[4775]: E1125 20:02:52.847887 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:03:07 crc kubenswrapper[4775]: I1125 20:03:07.847626 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:03:07 crc kubenswrapper[4775]: E1125 20:03:07.848450 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:03:19 crc kubenswrapper[4775]: I1125 20:03:19.080192 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-m7qwp"] Nov 25 20:03:19 crc kubenswrapper[4775]: I1125 20:03:19.107753 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-m7qwp"] Nov 25 20:03:20 crc kubenswrapper[4775]: I1125 20:03:20.031750 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-zrsbv"] Nov 25 20:03:20 crc kubenswrapper[4775]: I1125 20:03:20.042413 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-zrsbv"] Nov 25 20:03:20 crc kubenswrapper[4775]: I1125 
20:03:20.866293 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df0ca5f-5a5f-4186-8d30-66d7dabaa31c" path="/var/lib/kubelet/pods/2df0ca5f-5a5f-4186-8d30-66d7dabaa31c/volumes" Nov 25 20:03:20 crc kubenswrapper[4775]: I1125 20:03:20.868177 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d1bc0cf-f7e6-4621-9098-b78e16900a73" path="/var/lib/kubelet/pods/9d1bc0cf-f7e6-4621-9098-b78e16900a73/volumes" Nov 25 20:03:21 crc kubenswrapper[4775]: I1125 20:03:21.048863 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-025d-account-create-update-g9cmz"] Nov 25 20:03:21 crc kubenswrapper[4775]: I1125 20:03:21.063166 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-76p9s"] Nov 25 20:03:21 crc kubenswrapper[4775]: I1125 20:03:21.075303 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6a37-account-create-update-2pzhq"] Nov 25 20:03:21 crc kubenswrapper[4775]: I1125 20:03:21.084500 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3eb0-account-create-update-m25ts"] Nov 25 20:03:21 crc kubenswrapper[4775]: I1125 20:03:21.092858 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-025d-account-create-update-g9cmz"] Nov 25 20:03:21 crc kubenswrapper[4775]: I1125 20:03:21.100423 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-76p9s"] Nov 25 20:03:21 crc kubenswrapper[4775]: I1125 20:03:21.106923 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3eb0-account-create-update-m25ts"] Nov 25 20:03:21 crc kubenswrapper[4775]: I1125 20:03:21.113017 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6a37-account-create-update-2pzhq"] Nov 25 20:03:22 crc kubenswrapper[4775]: I1125 20:03:22.848477 4775 scope.go:117] "RemoveContainer" 
containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:03:22 crc kubenswrapper[4775]: E1125 20:03:22.849019 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:03:22 crc kubenswrapper[4775]: I1125 20:03:22.863175 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0504e23b-4c14-4256-b888-76415dbc9ae6" path="/var/lib/kubelet/pods/0504e23b-4c14-4256-b888-76415dbc9ae6/volumes" Nov 25 20:03:22 crc kubenswrapper[4775]: I1125 20:03:22.864197 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ccf429-6851-44f1-8370-b75877bbaa53" path="/var/lib/kubelet/pods/48ccf429-6851-44f1-8370-b75877bbaa53/volumes" Nov 25 20:03:22 crc kubenswrapper[4775]: I1125 20:03:22.864957 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731e8277-1449-4718-af2c-ded02ecfe4a9" path="/var/lib/kubelet/pods/731e8277-1449-4718-af2c-ded02ecfe4a9/volumes" Nov 25 20:03:22 crc kubenswrapper[4775]: I1125 20:03:22.865610 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95839f5f-fa72-41a6-88aa-bfd01a5c6571" path="/var/lib/kubelet/pods/95839f5f-fa72-41a6-88aa-bfd01a5c6571/volumes" Nov 25 20:03:26 crc kubenswrapper[4775]: I1125 20:03:26.292008 4775 generic.go:334] "Generic (PLEG): container finished" podID="3405920b-f460-4a4a-ab4b-9a2e021954e8" containerID="79b65ec4d0cdd98f40628c9c991d5c6e95dda74131836baf45a6aa392b2615f8" exitCode=0 Nov 25 20:03:26 crc kubenswrapper[4775]: I1125 20:03:26.292114 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv" event={"ID":"3405920b-f460-4a4a-ab4b-9a2e021954e8","Type":"ContainerDied","Data":"79b65ec4d0cdd98f40628c9c991d5c6e95dda74131836baf45a6aa392b2615f8"} Nov 25 20:03:27 crc kubenswrapper[4775]: I1125 20:03:27.893716 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv" Nov 25 20:03:27 crc kubenswrapper[4775]: I1125 20:03:27.983304 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-ssh-key\") pod \"3405920b-f460-4a4a-ab4b-9a2e021954e8\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " Nov 25 20:03:27 crc kubenswrapper[4775]: I1125 20:03:27.983421 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-inventory\") pod \"3405920b-f460-4a4a-ab4b-9a2e021954e8\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " Nov 25 20:03:27 crc kubenswrapper[4775]: I1125 20:03:27.983440 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snbzs\" (UniqueName: \"kubernetes.io/projected/3405920b-f460-4a4a-ab4b-9a2e021954e8-kube-api-access-snbzs\") pod \"3405920b-f460-4a4a-ab4b-9a2e021954e8\" (UID: \"3405920b-f460-4a4a-ab4b-9a2e021954e8\") " Nov 25 20:03:27 crc kubenswrapper[4775]: I1125 20:03:27.996033 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3405920b-f460-4a4a-ab4b-9a2e021954e8-kube-api-access-snbzs" (OuterVolumeSpecName: "kube-api-access-snbzs") pod "3405920b-f460-4a4a-ab4b-9a2e021954e8" (UID: "3405920b-f460-4a4a-ab4b-9a2e021954e8"). InnerVolumeSpecName "kube-api-access-snbzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.029386 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-inventory" (OuterVolumeSpecName: "inventory") pod "3405920b-f460-4a4a-ab4b-9a2e021954e8" (UID: "3405920b-f460-4a4a-ab4b-9a2e021954e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.033539 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3405920b-f460-4a4a-ab4b-9a2e021954e8" (UID: "3405920b-f460-4a4a-ab4b-9a2e021954e8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.085778 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.085823 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snbzs\" (UniqueName: \"kubernetes.io/projected/3405920b-f460-4a4a-ab4b-9a2e021954e8-kube-api-access-snbzs\") on node \"crc\" DevicePath \"\"" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.085845 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3405920b-f460-4a4a-ab4b-9a2e021954e8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.317559 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv" 
event={"ID":"3405920b-f460-4a4a-ab4b-9a2e021954e8","Type":"ContainerDied","Data":"55fa4f035dd9efcc8c91b6acc83f7797d572fe81ebfbf79ebc30c297a8208dab"} Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.317624 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55fa4f035dd9efcc8c91b6acc83f7797d572fe81ebfbf79ebc30c297a8208dab" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.318108 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.446370 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qvmdp"] Nov 25 20:03:28 crc kubenswrapper[4775]: E1125 20:03:28.446754 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3405920b-f460-4a4a-ab4b-9a2e021954e8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.446775 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3405920b-f460-4a4a-ab4b-9a2e021954e8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.447014 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3405920b-f460-4a4a-ab4b-9a2e021954e8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.447600 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.450394 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.450502 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.450867 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.451492 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.468974 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qvmdp"] Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.599987 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qvmdp\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.600226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psbnp\" (UniqueName: \"kubernetes.io/projected/8a1b3d54-58bb-4800-9f16-e5df14a6d679-kube-api-access-psbnp\") pod \"ssh-known-hosts-edpm-deployment-qvmdp\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.600353 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qvmdp\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.702174 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qvmdp\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.702886 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psbnp\" (UniqueName: \"kubernetes.io/projected/8a1b3d54-58bb-4800-9f16-e5df14a6d679-kube-api-access-psbnp\") pod \"ssh-known-hosts-edpm-deployment-qvmdp\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.702972 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qvmdp\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.705943 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qvmdp\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 
25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.707884 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qvmdp\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.724487 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psbnp\" (UniqueName: \"kubernetes.io/projected/8a1b3d54-58bb-4800-9f16-e5df14a6d679-kube-api-access-psbnp\") pod \"ssh-known-hosts-edpm-deployment-qvmdp\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:28 crc kubenswrapper[4775]: I1125 20:03:28.780164 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:29 crc kubenswrapper[4775]: I1125 20:03:29.378898 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qvmdp"] Nov 25 20:03:30 crc kubenswrapper[4775]: I1125 20:03:30.338730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" event={"ID":"8a1b3d54-58bb-4800-9f16-e5df14a6d679","Type":"ContainerStarted","Data":"bd82e87b329eea56a68faa2940eb531b31934998b927abb0b0df2a97c7f8e30b"} Nov 25 20:03:30 crc kubenswrapper[4775]: I1125 20:03:30.339333 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" event={"ID":"8a1b3d54-58bb-4800-9f16-e5df14a6d679","Type":"ContainerStarted","Data":"4b8c5d2738341fa7271407ff45244e916dda15160d8606a840b86335bfb44d21"} Nov 25 20:03:30 crc kubenswrapper[4775]: I1125 20:03:30.375396 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" podStartSLOduration=1.905782807 podStartE2EDuration="2.375375708s" podCreationTimestamp="2025-11-25 20:03:28 +0000 UTC" firstStartedPulling="2025-11-25 20:03:29.386538609 +0000 UTC m=+1791.302900985" lastFinishedPulling="2025-11-25 20:03:29.85613152 +0000 UTC m=+1791.772493886" observedRunningTime="2025-11-25 20:03:30.360064553 +0000 UTC m=+1792.276426959" watchObservedRunningTime="2025-11-25 20:03:30.375375708 +0000 UTC m=+1792.291738074" Nov 25 20:03:35 crc kubenswrapper[4775]: I1125 20:03:35.847844 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:03:35 crc kubenswrapper[4775]: E1125 20:03:35.849059 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:03:38 crc kubenswrapper[4775]: I1125 20:03:38.431551 4775 generic.go:334] "Generic (PLEG): container finished" podID="8a1b3d54-58bb-4800-9f16-e5df14a6d679" containerID="bd82e87b329eea56a68faa2940eb531b31934998b927abb0b0df2a97c7f8e30b" exitCode=0 Nov 25 20:03:38 crc kubenswrapper[4775]: I1125 20:03:38.431888 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" event={"ID":"8a1b3d54-58bb-4800-9f16-e5df14a6d679","Type":"ContainerDied","Data":"bd82e87b329eea56a68faa2940eb531b31934998b927abb0b0df2a97c7f8e30b"} Nov 25 20:03:39 crc kubenswrapper[4775]: I1125 20:03:39.962748 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.071374 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-inventory-0\") pod \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.071601 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psbnp\" (UniqueName: \"kubernetes.io/projected/8a1b3d54-58bb-4800-9f16-e5df14a6d679-kube-api-access-psbnp\") pod \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.071780 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-ssh-key-openstack-edpm-ipam\") pod \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\" (UID: \"8a1b3d54-58bb-4800-9f16-e5df14a6d679\") " Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.078430 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1b3d54-58bb-4800-9f16-e5df14a6d679-kube-api-access-psbnp" (OuterVolumeSpecName: "kube-api-access-psbnp") pod "8a1b3d54-58bb-4800-9f16-e5df14a6d679" (UID: "8a1b3d54-58bb-4800-9f16-e5df14a6d679"). InnerVolumeSpecName "kube-api-access-psbnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.108394 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8a1b3d54-58bb-4800-9f16-e5df14a6d679" (UID: "8a1b3d54-58bb-4800-9f16-e5df14a6d679"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.117777 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8a1b3d54-58bb-4800-9f16-e5df14a6d679" (UID: "8a1b3d54-58bb-4800-9f16-e5df14a6d679"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.173972 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psbnp\" (UniqueName: \"kubernetes.io/projected/8a1b3d54-58bb-4800-9f16-e5df14a6d679-kube-api-access-psbnp\") on node \"crc\" DevicePath \"\"" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.174009 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.174020 4775 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8a1b3d54-58bb-4800-9f16-e5df14a6d679-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.457919 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" event={"ID":"8a1b3d54-58bb-4800-9f16-e5df14a6d679","Type":"ContainerDied","Data":"4b8c5d2738341fa7271407ff45244e916dda15160d8606a840b86335bfb44d21"} Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.458198 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b8c5d2738341fa7271407ff45244e916dda15160d8606a840b86335bfb44d21" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.458042 
4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qvmdp" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.563821 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf"] Nov 25 20:03:40 crc kubenswrapper[4775]: E1125 20:03:40.564278 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1b3d54-58bb-4800-9f16-e5df14a6d679" containerName="ssh-known-hosts-edpm-deployment" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.564298 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1b3d54-58bb-4800-9f16-e5df14a6d679" containerName="ssh-known-hosts-edpm-deployment" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.564545 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1b3d54-58bb-4800-9f16-e5df14a6d679" containerName="ssh-known-hosts-edpm-deployment" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.565396 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.572372 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.574427 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.575187 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.578156 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.583683 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z229\" (UniqueName: \"kubernetes.io/projected/ba722e81-0064-4edc-a2be-8bb407416475-kube-api-access-4z229\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fszmf\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.583933 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fszmf\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.584076 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-ssh-key\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-fszmf\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.592191 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf"] Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.686300 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z229\" (UniqueName: \"kubernetes.io/projected/ba722e81-0064-4edc-a2be-8bb407416475-kube-api-access-4z229\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fszmf\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.686453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fszmf\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.686534 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fszmf\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.692674 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fszmf\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.693162 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fszmf\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.707522 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z229\" (UniqueName: \"kubernetes.io/projected/ba722e81-0064-4edc-a2be-8bb407416475-kube-api-access-4z229\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fszmf\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:40 crc kubenswrapper[4775]: I1125 20:03:40.885530 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:41 crc kubenswrapper[4775]: I1125 20:03:41.294978 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf"] Nov 25 20:03:41 crc kubenswrapper[4775]: I1125 20:03:41.479565 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" event={"ID":"ba722e81-0064-4edc-a2be-8bb407416475","Type":"ContainerStarted","Data":"771abc701b3da2b126a7cab1bf9624a1ee3392ea946e6c8aedf642b67d79ae20"} Nov 25 20:03:42 crc kubenswrapper[4775]: I1125 20:03:42.501896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" event={"ID":"ba722e81-0064-4edc-a2be-8bb407416475","Type":"ContainerStarted","Data":"1b1f7c46777e4b941a4b23ab85653c1dc520e47750fee4f9fc439c6776740656"} Nov 25 20:03:42 crc kubenswrapper[4775]: I1125 20:03:42.528540 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" podStartSLOduration=2.082345771 podStartE2EDuration="2.528507997s" podCreationTimestamp="2025-11-25 20:03:40 +0000 UTC" firstStartedPulling="2025-11-25 20:03:41.304935479 +0000 UTC m=+1803.221297845" lastFinishedPulling="2025-11-25 20:03:41.751097675 +0000 UTC m=+1803.667460071" observedRunningTime="2025-11-25 20:03:42.527004626 +0000 UTC m=+1804.443367042" watchObservedRunningTime="2025-11-25 20:03:42.528507997 +0000 UTC m=+1804.444870393" Nov 25 20:03:47 crc kubenswrapper[4775]: I1125 20:03:47.064246 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gfhh4"] Nov 25 20:03:47 crc kubenswrapper[4775]: I1125 20:03:47.078027 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gfhh4"] Nov 25 20:03:47 crc kubenswrapper[4775]: I1125 20:03:47.944624 4775 
scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:03:47 crc kubenswrapper[4775]: E1125 20:03:47.945167 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:03:48 crc kubenswrapper[4775]: I1125 20:03:48.862570 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f2e969-6ce1-4720-a952-904324c3795c" path="/var/lib/kubelet/pods/a3f2e969-6ce1-4720-a952-904324c3795c/volumes" Nov 25 20:03:49 crc kubenswrapper[4775]: I1125 20:03:49.969696 4775 scope.go:117] "RemoveContainer" containerID="661166203c6c2c35526ca81182541281dc54660f35d0bd42530fef2716974ff9" Nov 25 20:03:49 crc kubenswrapper[4775]: I1125 20:03:49.999017 4775 scope.go:117] "RemoveContainer" containerID="d9babb4088c262ea3b3f2900253eaa6efa17639108a62bc7884435531c114894" Nov 25 20:03:50 crc kubenswrapper[4775]: I1125 20:03:50.086081 4775 scope.go:117] "RemoveContainer" containerID="0f513807f0f7afe4a27a21a2491b85d403760c882e2f04c135090e9d96c3aeb7" Nov 25 20:03:50 crc kubenswrapper[4775]: I1125 20:03:50.113109 4775 scope.go:117] "RemoveContainer" containerID="1fa0be2c96f86f9443d4d1b18a657c009f1519093ba208c8d9352159b94e8bb5" Nov 25 20:03:50 crc kubenswrapper[4775]: I1125 20:03:50.144837 4775 scope.go:117] "RemoveContainer" containerID="9804a1d98e005914775f5746f81df983be4e0b4afc5f1356cd883c8c1d5b5ceb" Nov 25 20:03:50 crc kubenswrapper[4775]: I1125 20:03:50.221986 4775 scope.go:117] "RemoveContainer" containerID="dac9aa04f4cbb44b49ec480df99b84c0f636fdcfdfcafe305d708c1fb805603e" Nov 25 20:03:50 crc kubenswrapper[4775]: I1125 20:03:50.240156 4775 
scope.go:117] "RemoveContainer" containerID="55d1fc8d4d796fd0073c5037e3dc9aa4db434991abee62afa5bd6a014e4161fd" Nov 25 20:03:51 crc kubenswrapper[4775]: I1125 20:03:51.600418 4775 generic.go:334] "Generic (PLEG): container finished" podID="ba722e81-0064-4edc-a2be-8bb407416475" containerID="1b1f7c46777e4b941a4b23ab85653c1dc520e47750fee4f9fc439c6776740656" exitCode=0 Nov 25 20:03:51 crc kubenswrapper[4775]: I1125 20:03:51.600525 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" event={"ID":"ba722e81-0064-4edc-a2be-8bb407416475","Type":"ContainerDied","Data":"1b1f7c46777e4b941a4b23ab85653c1dc520e47750fee4f9fc439c6776740656"} Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.094148 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.168074 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-ssh-key\") pod \"ba722e81-0064-4edc-a2be-8bb407416475\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.168114 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z229\" (UniqueName: \"kubernetes.io/projected/ba722e81-0064-4edc-a2be-8bb407416475-kube-api-access-4z229\") pod \"ba722e81-0064-4edc-a2be-8bb407416475\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.168158 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-inventory\") pod \"ba722e81-0064-4edc-a2be-8bb407416475\" (UID: \"ba722e81-0064-4edc-a2be-8bb407416475\") " Nov 25 20:03:53 crc kubenswrapper[4775]: 
I1125 20:03:53.176458 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba722e81-0064-4edc-a2be-8bb407416475-kube-api-access-4z229" (OuterVolumeSpecName: "kube-api-access-4z229") pod "ba722e81-0064-4edc-a2be-8bb407416475" (UID: "ba722e81-0064-4edc-a2be-8bb407416475"). InnerVolumeSpecName "kube-api-access-4z229". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.202276 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-inventory" (OuterVolumeSpecName: "inventory") pod "ba722e81-0064-4edc-a2be-8bb407416475" (UID: "ba722e81-0064-4edc-a2be-8bb407416475"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.213872 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ba722e81-0064-4edc-a2be-8bb407416475" (UID: "ba722e81-0064-4edc-a2be-8bb407416475"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.271181 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.271256 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z229\" (UniqueName: \"kubernetes.io/projected/ba722e81-0064-4edc-a2be-8bb407416475-kube-api-access-4z229\") on node \"crc\" DevicePath \"\"" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.271315 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba722e81-0064-4edc-a2be-8bb407416475-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.625245 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" event={"ID":"ba722e81-0064-4edc-a2be-8bb407416475","Type":"ContainerDied","Data":"771abc701b3da2b126a7cab1bf9624a1ee3392ea946e6c8aedf642b67d79ae20"} Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.625292 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="771abc701b3da2b126a7cab1bf9624a1ee3392ea946e6c8aedf642b67d79ae20" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.625316 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.739255 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc"] Nov 25 20:03:53 crc kubenswrapper[4775]: E1125 20:03:53.740001 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba722e81-0064-4edc-a2be-8bb407416475" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.740030 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba722e81-0064-4edc-a2be-8bb407416475" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.740438 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba722e81-0064-4edc-a2be-8bb407416475" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.741549 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.747433 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc"] Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.749321 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.749484 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.749561 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.749781 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.781964 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vspw\" (UniqueName: \"kubernetes.io/projected/a35cae08-0604-40db-8309-110e81787258-kube-api-access-4vspw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.782035 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.782305 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.883465 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vspw\" (UniqueName: \"kubernetes.io/projected/a35cae08-0604-40db-8309-110e81787258-kube-api-access-4vspw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.883505 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.883656 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.890275 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.890720 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:53 crc kubenswrapper[4775]: I1125 20:03:53.901637 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vspw\" (UniqueName: \"kubernetes.io/projected/a35cae08-0604-40db-8309-110e81787258-kube-api-access-4vspw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:54 crc kubenswrapper[4775]: I1125 20:03:54.086110 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:03:54 crc kubenswrapper[4775]: I1125 20:03:54.644787 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc"] Nov 25 20:03:55 crc kubenswrapper[4775]: I1125 20:03:55.655533 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" event={"ID":"a35cae08-0604-40db-8309-110e81787258","Type":"ContainerStarted","Data":"a73328fa387252255ab35bcf877b8dc07c3fbe73e312e9d7351aea4f72df7621"} Nov 25 20:03:55 crc kubenswrapper[4775]: I1125 20:03:55.656225 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" event={"ID":"a35cae08-0604-40db-8309-110e81787258","Type":"ContainerStarted","Data":"22bb04559133e42215f850488ced89ed93eb7bc214ef2c3efce1a33f7c981db7"} Nov 25 20:03:55 crc kubenswrapper[4775]: I1125 20:03:55.676626 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" podStartSLOduration=2.218162862 podStartE2EDuration="2.676607591s" podCreationTimestamp="2025-11-25 20:03:53 +0000 UTC" firstStartedPulling="2025-11-25 20:03:54.648452754 +0000 UTC m=+1816.564815160" lastFinishedPulling="2025-11-25 20:03:55.106897533 +0000 UTC m=+1817.023259889" observedRunningTime="2025-11-25 20:03:55.675029158 +0000 UTC m=+1817.591391614" watchObservedRunningTime="2025-11-25 20:03:55.676607591 +0000 UTC m=+1817.592969967" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.247602 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5nsdq"] Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.251384 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.262550 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nsdq"] Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.333198 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76wch\" (UniqueName: \"kubernetes.io/projected/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-kube-api-access-76wch\") pod \"redhat-operators-5nsdq\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.333271 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-catalog-content\") pod \"redhat-operators-5nsdq\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.333336 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-utilities\") pod \"redhat-operators-5nsdq\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.435422 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76wch\" (UniqueName: \"kubernetes.io/projected/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-kube-api-access-76wch\") pod \"redhat-operators-5nsdq\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.435489 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-catalog-content\") pod \"redhat-operators-5nsdq\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.435560 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-utilities\") pod \"redhat-operators-5nsdq\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.436236 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-utilities\") pod \"redhat-operators-5nsdq\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.436370 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-catalog-content\") pod \"redhat-operators-5nsdq\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.464157 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76wch\" (UniqueName: \"kubernetes.io/projected/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-kube-api-access-76wch\") pod \"redhat-operators-5nsdq\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:56 crc kubenswrapper[4775]: I1125 20:03:56.586911 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:03:57 crc kubenswrapper[4775]: I1125 20:03:57.042601 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nsdq"] Nov 25 20:03:57 crc kubenswrapper[4775]: W1125 20:03:57.043361 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd77f5c7a_5a1e_4a82_9519_4c2f4c1d491c.slice/crio-ab1c101dee6157d9179e09af1f4d761db76db2d551412a847d775f2a6cc6568d WatchSource:0}: Error finding container ab1c101dee6157d9179e09af1f4d761db76db2d551412a847d775f2a6cc6568d: Status 404 returned error can't find the container with id ab1c101dee6157d9179e09af1f4d761db76db2d551412a847d775f2a6cc6568d Nov 25 20:03:57 crc kubenswrapper[4775]: I1125 20:03:57.679894 4775 generic.go:334] "Generic (PLEG): container finished" podID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerID="eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f" exitCode=0 Nov 25 20:03:57 crc kubenswrapper[4775]: I1125 20:03:57.679954 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nsdq" event={"ID":"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c","Type":"ContainerDied","Data":"eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f"} Nov 25 20:03:57 crc kubenswrapper[4775]: I1125 20:03:57.680150 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nsdq" event={"ID":"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c","Type":"ContainerStarted","Data":"ab1c101dee6157d9179e09af1f4d761db76db2d551412a847d775f2a6cc6568d"} Nov 25 20:03:59 crc kubenswrapper[4775]: I1125 20:03:59.713232 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nsdq" 
event={"ID":"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c","Type":"ContainerStarted","Data":"80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2"} Nov 25 20:04:00 crc kubenswrapper[4775]: I1125 20:04:00.732028 4775 generic.go:334] "Generic (PLEG): container finished" podID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerID="80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2" exitCode=0 Nov 25 20:04:00 crc kubenswrapper[4775]: I1125 20:04:00.732097 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nsdq" event={"ID":"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c","Type":"ContainerDied","Data":"80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2"} Nov 25 20:04:01 crc kubenswrapper[4775]: I1125 20:04:01.745860 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nsdq" event={"ID":"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c","Type":"ContainerStarted","Data":"4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597"} Nov 25 20:04:01 crc kubenswrapper[4775]: I1125 20:04:01.765893 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5nsdq" podStartSLOduration=2.292820854 podStartE2EDuration="5.765868865s" podCreationTimestamp="2025-11-25 20:03:56 +0000 UTC" firstStartedPulling="2025-11-25 20:03:57.681262881 +0000 UTC m=+1819.597625247" lastFinishedPulling="2025-11-25 20:04:01.154310852 +0000 UTC m=+1823.070673258" observedRunningTime="2025-11-25 20:04:01.760133589 +0000 UTC m=+1823.676495965" watchObservedRunningTime="2025-11-25 20:04:01.765868865 +0000 UTC m=+1823.682231271" Nov 25 20:04:02 crc kubenswrapper[4775]: I1125 20:04:02.847531 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:04:02 crc kubenswrapper[4775]: E1125 20:04:02.848043 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:04:05 crc kubenswrapper[4775]: I1125 20:04:05.810369 4775 generic.go:334] "Generic (PLEG): container finished" podID="a35cae08-0604-40db-8309-110e81787258" containerID="a73328fa387252255ab35bcf877b8dc07c3fbe73e312e9d7351aea4f72df7621" exitCode=0 Nov 25 20:04:05 crc kubenswrapper[4775]: I1125 20:04:05.810466 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" event={"ID":"a35cae08-0604-40db-8309-110e81787258","Type":"ContainerDied","Data":"a73328fa387252255ab35bcf877b8dc07c3fbe73e312e9d7351aea4f72df7621"} Nov 25 20:04:06 crc kubenswrapper[4775]: I1125 20:04:06.587279 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:04:06 crc kubenswrapper[4775]: I1125 20:04:06.587662 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.297936 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.483427 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-ssh-key\") pod \"a35cae08-0604-40db-8309-110e81787258\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.483887 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vspw\" (UniqueName: \"kubernetes.io/projected/a35cae08-0604-40db-8309-110e81787258-kube-api-access-4vspw\") pod \"a35cae08-0604-40db-8309-110e81787258\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.483950 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-inventory\") pod \"a35cae08-0604-40db-8309-110e81787258\" (UID: \"a35cae08-0604-40db-8309-110e81787258\") " Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.490637 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35cae08-0604-40db-8309-110e81787258-kube-api-access-4vspw" (OuterVolumeSpecName: "kube-api-access-4vspw") pod "a35cae08-0604-40db-8309-110e81787258" (UID: "a35cae08-0604-40db-8309-110e81787258"). InnerVolumeSpecName "kube-api-access-4vspw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.519824 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a35cae08-0604-40db-8309-110e81787258" (UID: "a35cae08-0604-40db-8309-110e81787258"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.519890 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-inventory" (OuterVolumeSpecName: "inventory") pod "a35cae08-0604-40db-8309-110e81787258" (UID: "a35cae08-0604-40db-8309-110e81787258"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.585824 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vspw\" (UniqueName: \"kubernetes.io/projected/a35cae08-0604-40db-8309-110e81787258-kube-api-access-4vspw\") on node \"crc\" DevicePath \"\"" Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.585874 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.585885 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a35cae08-0604-40db-8309-110e81787258-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.647735 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5nsdq" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerName="registry-server" probeResult="failure" output=< Nov 25 20:04:07 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Nov 25 20:04:07 crc kubenswrapper[4775]: > Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.835534 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" 
event={"ID":"a35cae08-0604-40db-8309-110e81787258","Type":"ContainerDied","Data":"22bb04559133e42215f850488ced89ed93eb7bc214ef2c3efce1a33f7c981db7"} Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.835575 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22bb04559133e42215f850488ced89ed93eb7bc214ef2c3efce1a33f7c981db7" Nov 25 20:04:07 crc kubenswrapper[4775]: I1125 20:04:07.835616 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc" Nov 25 20:04:08 crc kubenswrapper[4775]: I1125 20:04:08.078432 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xd9d5"] Nov 25 20:04:08 crc kubenswrapper[4775]: I1125 20:04:08.092783 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xd9d5"] Nov 25 20:04:08 crc kubenswrapper[4775]: I1125 20:04:08.869798 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a74e1346-a6a0-455c-917a-fa611dc53263" path="/var/lib/kubelet/pods/a74e1346-a6a0-455c-917a-fa611dc53263/volumes" Nov 25 20:04:10 crc kubenswrapper[4775]: I1125 20:04:10.051315 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k596p"] Nov 25 20:04:10 crc kubenswrapper[4775]: I1125 20:04:10.062644 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k596p"] Nov 25 20:04:10 crc kubenswrapper[4775]: I1125 20:04:10.862259 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21bd9c8a-3662-40f1-b939-ed5320b65bcb" path="/var/lib/kubelet/pods/21bd9c8a-3662-40f1-b939-ed5320b65bcb/volumes" Nov 25 20:04:16 crc kubenswrapper[4775]: I1125 20:04:16.666772 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:04:16 crc kubenswrapper[4775]: I1125 
20:04:16.719949 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:04:16 crc kubenswrapper[4775]: I1125 20:04:16.847369 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:04:16 crc kubenswrapper[4775]: E1125 20:04:16.847847 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:04:16 crc kubenswrapper[4775]: I1125 20:04:16.912399 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nsdq"] Nov 25 20:04:17 crc kubenswrapper[4775]: I1125 20:04:17.946204 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5nsdq" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerName="registry-server" containerID="cri-o://4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597" gracePeriod=2 Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.475093 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.542050 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-catalog-content\") pod \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.542102 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-utilities\") pod \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.542147 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76wch\" (UniqueName: \"kubernetes.io/projected/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-kube-api-access-76wch\") pod \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\" (UID: \"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c\") " Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.543360 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-utilities" (OuterVolumeSpecName: "utilities") pod "d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" (UID: "d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.550745 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-kube-api-access-76wch" (OuterVolumeSpecName: "kube-api-access-76wch") pod "d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" (UID: "d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c"). InnerVolumeSpecName "kube-api-access-76wch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.619713 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" (UID: "d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.643952 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.643981 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.643991 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76wch\" (UniqueName: \"kubernetes.io/projected/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c-kube-api-access-76wch\") on node \"crc\" DevicePath \"\"" Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.959240 4775 generic.go:334] "Generic (PLEG): container finished" podID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerID="4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597" exitCode=0 Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.959306 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5nsdq" Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.959314 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nsdq" event={"ID":"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c","Type":"ContainerDied","Data":"4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597"} Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.959379 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nsdq" event={"ID":"d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c","Type":"ContainerDied","Data":"ab1c101dee6157d9179e09af1f4d761db76db2d551412a847d775f2a6cc6568d"} Nov 25 20:04:18 crc kubenswrapper[4775]: I1125 20:04:18.959432 4775 scope.go:117] "RemoveContainer" containerID="4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597" Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.000113 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nsdq"] Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.004316 4775 scope.go:117] "RemoveContainer" containerID="80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2" Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.018453 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5nsdq"] Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.040978 4775 scope.go:117] "RemoveContainer" containerID="eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f" Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.092538 4775 scope.go:117] "RemoveContainer" containerID="4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597" Nov 25 20:04:19 crc kubenswrapper[4775]: E1125 20:04:19.092955 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597\": container with ID starting with 4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597 not found: ID does not exist" containerID="4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597" Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.092993 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597"} err="failed to get container status \"4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597\": rpc error: code = NotFound desc = could not find container \"4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597\": container with ID starting with 4ff8abe4fb0de9eed410f9e54cf443f63791c2d193aa7028d0ae02858a2fb597 not found: ID does not exist" Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.093018 4775 scope.go:117] "RemoveContainer" containerID="80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2" Nov 25 20:04:19 crc kubenswrapper[4775]: E1125 20:04:19.093443 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2\": container with ID starting with 80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2 not found: ID does not exist" containerID="80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2" Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.093474 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2"} err="failed to get container status \"80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2\": rpc error: code = NotFound desc = could not find container \"80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2\": container with ID 
starting with 80dc1418967c942e022016f1df628b38cdc173a9d28e4a4b4ee5ffb1eb9e4dc2 not found: ID does not exist" Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.093491 4775 scope.go:117] "RemoveContainer" containerID="eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f" Nov 25 20:04:19 crc kubenswrapper[4775]: E1125 20:04:19.093869 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f\": container with ID starting with eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f not found: ID does not exist" containerID="eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f" Nov 25 20:04:19 crc kubenswrapper[4775]: I1125 20:04:19.093898 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f"} err="failed to get container status \"eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f\": rpc error: code = NotFound desc = could not find container \"eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f\": container with ID starting with eafb7b43fa439468daca99af2b5c3d36b3a360add337a79bd2c25c2096e4ff9f not found: ID does not exist" Nov 25 20:04:20 crc kubenswrapper[4775]: I1125 20:04:20.863047 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" path="/var/lib/kubelet/pods/d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c/volumes" Nov 25 20:04:28 crc kubenswrapper[4775]: I1125 20:04:28.858620 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:04:28 crc kubenswrapper[4775]: E1125 20:04:28.859853 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:04:40 crc kubenswrapper[4775]: I1125 20:04:40.850520 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:04:40 crc kubenswrapper[4775]: E1125 20:04:40.851631 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:04:50 crc kubenswrapper[4775]: I1125 20:04:50.370976 4775 scope.go:117] "RemoveContainer" containerID="b0bb3eb38364026944a29a837a8392eaa34f4b373db67104c8728fb713c61bef" Nov 25 20:04:50 crc kubenswrapper[4775]: I1125 20:04:50.423573 4775 scope.go:117] "RemoveContainer" containerID="19cd16a1a81ca894740372fc1445068a61e7da0a75134d41872db2050e233d66" Nov 25 20:04:53 crc kubenswrapper[4775]: I1125 20:04:53.060423 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4xj8s"] Nov 25 20:04:53 crc kubenswrapper[4775]: I1125 20:04:53.074036 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4xj8s"] Nov 25 20:04:54 crc kubenswrapper[4775]: I1125 20:04:54.864371 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4911bcab-f90c-4d9c-b1e3-743dd010d664" path="/var/lib/kubelet/pods/4911bcab-f90c-4d9c-b1e3-743dd010d664/volumes" Nov 25 20:04:55 crc kubenswrapper[4775]: I1125 20:04:55.847557 4775 scope.go:117] "RemoveContainer" 
containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:04:56 crc kubenswrapper[4775]: I1125 20:04:56.393497 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"43c10e56638b61bef408264f80dccc089018bf2c0426ecc5f9c21c1f71057a16"} Nov 25 20:05:50 crc kubenswrapper[4775]: I1125 20:05:50.556856 4775 scope.go:117] "RemoveContainer" containerID="4331e6b340bebce29a13b3e58ce23af16e329a10e5298575914e47713ebb723f" Nov 25 20:07:11 crc kubenswrapper[4775]: I1125 20:07:11.070273 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:07:11 crc kubenswrapper[4775]: I1125 20:07:11.070936 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:07:41 crc kubenswrapper[4775]: I1125 20:07:41.070802 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:07:41 crc kubenswrapper[4775]: I1125 20:07:41.071492 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.393249 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.409689 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.419190 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk972"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.428646 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fszmf"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.438180 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.450625 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.463299 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.472233 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b8dkv"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.482711 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nfbws"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.490811 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dmn4l"] Nov 25 20:07:57 crc 
kubenswrapper[4775]: I1125 20:07:57.499438 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.504357 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qvmdp"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.530961 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.540349 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qvmdp"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.548612 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.556319 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wzdzc"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.563317 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.569395 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vclbd"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.583907 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5dqlq"] Nov 25 20:07:57 crc kubenswrapper[4775]: I1125 20:07:57.591946 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g4tq7"] Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.863539 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="112aead4-4563-494f-a3a6-eb7facee43f3" path="/var/lib/kubelet/pods/112aead4-4563-494f-a3a6-eb7facee43f3/volumes" Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.864938 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3405920b-f460-4a4a-ab4b-9a2e021954e8" path="/var/lib/kubelet/pods/3405920b-f460-4a4a-ab4b-9a2e021954e8/volumes" Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.865966 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695faa37-300a-4e2a-b516-a5053d5663dc" path="/var/lib/kubelet/pods/695faa37-300a-4e2a-b516-a5053d5663dc/volumes" Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.867061 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84915a86-089c-4a6e-9890-1758d0912875" path="/var/lib/kubelet/pods/84915a86-089c-4a6e-9890-1758d0912875/volumes" Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.869255 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1b3d54-58bb-4800-9f16-e5df14a6d679" path="/var/lib/kubelet/pods/8a1b3d54-58bb-4800-9f16-e5df14a6d679/volumes" Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.870268 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35cae08-0604-40db-8309-110e81787258" path="/var/lib/kubelet/pods/a35cae08-0604-40db-8309-110e81787258/volumes" Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.871290 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b015afc2-1bb1-4ea6-8c16-317c1b285470" path="/var/lib/kubelet/pods/b015afc2-1bb1-4ea6-8c16-317c1b285470/volumes" Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.872360 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e07a3a-fdc4-4fd1-9039-181cc3ae0464" path="/var/lib/kubelet/pods/b5e07a3a-fdc4-4fd1-9039-181cc3ae0464/volumes" Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.874598 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ba722e81-0064-4edc-a2be-8bb407416475" path="/var/lib/kubelet/pods/ba722e81-0064-4edc-a2be-8bb407416475/volumes" Nov 25 20:07:58 crc kubenswrapper[4775]: I1125 20:07:58.875836 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1164693-72ba-4636-9d6a-b89e5a6d2145" path="/var/lib/kubelet/pods/f1164693-72ba-4636-9d6a-b89e5a6d2145/volumes" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.073629 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8"] Nov 25 20:08:03 crc kubenswrapper[4775]: E1125 20:08:03.074421 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerName="extract-utilities" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.074436 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerName="extract-utilities" Nov 25 20:08:03 crc kubenswrapper[4775]: E1125 20:08:03.074445 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerName="registry-server" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.074454 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerName="registry-server" Nov 25 20:08:03 crc kubenswrapper[4775]: E1125 20:08:03.074478 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35cae08-0604-40db-8309-110e81787258" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.074487 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35cae08-0604-40db-8309-110e81787258" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:08:03 crc kubenswrapper[4775]: E1125 20:08:03.074510 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" 
containerName="extract-content" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.074517 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerName="extract-content" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.074755 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77f5c7a-5a1e-4a82-9519-4c2f4c1d491c" containerName="registry-server" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.074778 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35cae08-0604-40db-8309-110e81787258" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.075440 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.077487 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.078819 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.079141 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.079996 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.082578 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.095220 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8"] Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.163489 
4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.163558 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.163730 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4599\" (UniqueName: \"kubernetes.io/projected/9057fb85-d24d-4016-ac68-44e9e52440dd-kube-api-access-p4599\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.163808 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.163876 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.265008 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4599\" (UniqueName: \"kubernetes.io/projected/9057fb85-d24d-4016-ac68-44e9e52440dd-kube-api-access-p4599\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.265143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.265221 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.265314 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" 
Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.265357 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.271987 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.271992 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.272604 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.274537 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" 
(UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.292010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4599\" (UniqueName: \"kubernetes.io/projected/9057fb85-d24d-4016-ac68-44e9e52440dd-kube-api-access-p4599\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.405782 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.758596 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8"] Nov 25 20:08:03 crc kubenswrapper[4775]: I1125 20:08:03.766929 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:08:04 crc kubenswrapper[4775]: I1125 20:08:04.294714 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" event={"ID":"9057fb85-d24d-4016-ac68-44e9e52440dd","Type":"ContainerStarted","Data":"ea6212d04a5b35b33241f5db603db154a7a424c6d724e4b6c1152e2fa404e465"} Nov 25 20:08:05 crc kubenswrapper[4775]: I1125 20:08:05.305767 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" event={"ID":"9057fb85-d24d-4016-ac68-44e9e52440dd","Type":"ContainerStarted","Data":"f8d02dbcf0efedaad0c84673ef4083f5db68064f28a005334f49cb660199b265"} Nov 25 20:08:05 crc kubenswrapper[4775]: I1125 20:08:05.336350 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" podStartSLOduration=1.819602538 podStartE2EDuration="2.336319657s" podCreationTimestamp="2025-11-25 20:08:03 +0000 UTC" firstStartedPulling="2025-11-25 20:08:03.766596556 +0000 UTC m=+2065.682958942" lastFinishedPulling="2025-11-25 20:08:04.283313695 +0000 UTC m=+2066.199676061" observedRunningTime="2025-11-25 20:08:05.321336509 +0000 UTC m=+2067.237698915" watchObservedRunningTime="2025-11-25 20:08:05.336319657 +0000 UTC m=+2067.252682063" Nov 25 20:08:11 crc kubenswrapper[4775]: I1125 20:08:11.070895 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:08:11 crc kubenswrapper[4775]: I1125 20:08:11.071373 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:08:11 crc kubenswrapper[4775]: I1125 20:08:11.071424 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 20:08:11 crc kubenswrapper[4775]: I1125 20:08:11.072342 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43c10e56638b61bef408264f80dccc089018bf2c0426ecc5f9c21c1f71057a16"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:08:11 crc kubenswrapper[4775]: I1125 20:08:11.072412 4775 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://43c10e56638b61bef408264f80dccc089018bf2c0426ecc5f9c21c1f71057a16" gracePeriod=600 Nov 25 20:08:11 crc kubenswrapper[4775]: I1125 20:08:11.368709 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="43c10e56638b61bef408264f80dccc089018bf2c0426ecc5f9c21c1f71057a16" exitCode=0 Nov 25 20:08:11 crc kubenswrapper[4775]: I1125 20:08:11.368807 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"43c10e56638b61bef408264f80dccc089018bf2c0426ecc5f9c21c1f71057a16"} Nov 25 20:08:11 crc kubenswrapper[4775]: I1125 20:08:11.369508 4775 scope.go:117] "RemoveContainer" containerID="7fc983aa541c778348ff2648198c8ea157052ceee6be3181453842b232452743" Nov 25 20:08:12 crc kubenswrapper[4775]: I1125 20:08:12.383724 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717"} Nov 25 20:08:16 crc kubenswrapper[4775]: I1125 20:08:16.425818 4775 generic.go:334] "Generic (PLEG): container finished" podID="9057fb85-d24d-4016-ac68-44e9e52440dd" containerID="f8d02dbcf0efedaad0c84673ef4083f5db68064f28a005334f49cb660199b265" exitCode=0 Nov 25 20:08:16 crc kubenswrapper[4775]: I1125 20:08:16.425914 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" event={"ID":"9057fb85-d24d-4016-ac68-44e9e52440dd","Type":"ContainerDied","Data":"f8d02dbcf0efedaad0c84673ef4083f5db68064f28a005334f49cb660199b265"} Nov 
25 20:08:17 crc kubenswrapper[4775]: I1125 20:08:17.856874 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:17 crc kubenswrapper[4775]: I1125 20:08:17.988442 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-inventory\") pod \"9057fb85-d24d-4016-ac68-44e9e52440dd\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " Nov 25 20:08:17 crc kubenswrapper[4775]: I1125 20:08:17.988610 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ceph\") pod \"9057fb85-d24d-4016-ac68-44e9e52440dd\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " Nov 25 20:08:17 crc kubenswrapper[4775]: I1125 20:08:17.988764 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ssh-key\") pod \"9057fb85-d24d-4016-ac68-44e9e52440dd\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " Nov 25 20:08:17 crc kubenswrapper[4775]: I1125 20:08:17.988815 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4599\" (UniqueName: \"kubernetes.io/projected/9057fb85-d24d-4016-ac68-44e9e52440dd-kube-api-access-p4599\") pod \"9057fb85-d24d-4016-ac68-44e9e52440dd\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " Nov 25 20:08:17 crc kubenswrapper[4775]: I1125 20:08:17.988875 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-repo-setup-combined-ca-bundle\") pod \"9057fb85-d24d-4016-ac68-44e9e52440dd\" (UID: \"9057fb85-d24d-4016-ac68-44e9e52440dd\") " Nov 25 20:08:17 crc 
kubenswrapper[4775]: I1125 20:08:17.994881 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9057fb85-d24d-4016-ac68-44e9e52440dd" (UID: "9057fb85-d24d-4016-ac68-44e9e52440dd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:08:17 crc kubenswrapper[4775]: I1125 20:08:17.994898 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9057fb85-d24d-4016-ac68-44e9e52440dd-kube-api-access-p4599" (OuterVolumeSpecName: "kube-api-access-p4599") pod "9057fb85-d24d-4016-ac68-44e9e52440dd" (UID: "9057fb85-d24d-4016-ac68-44e9e52440dd"). InnerVolumeSpecName "kube-api-access-p4599". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:08:17 crc kubenswrapper[4775]: I1125 20:08:17.995741 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ceph" (OuterVolumeSpecName: "ceph") pod "9057fb85-d24d-4016-ac68-44e9e52440dd" (UID: "9057fb85-d24d-4016-ac68-44e9e52440dd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.042812 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9057fb85-d24d-4016-ac68-44e9e52440dd" (UID: "9057fb85-d24d-4016-ac68-44e9e52440dd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.092816 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.092848 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4599\" (UniqueName: \"kubernetes.io/projected/9057fb85-d24d-4016-ac68-44e9e52440dd-kube-api-access-p4599\") on node \"crc\" DevicePath \"\"" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.092862 4775 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.092871 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.095206 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-inventory" (OuterVolumeSpecName: "inventory") pod "9057fb85-d24d-4016-ac68-44e9e52440dd" (UID: "9057fb85-d24d-4016-ac68-44e9e52440dd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.194048 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9057fb85-d24d-4016-ac68-44e9e52440dd-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.448764 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" event={"ID":"9057fb85-d24d-4016-ac68-44e9e52440dd","Type":"ContainerDied","Data":"ea6212d04a5b35b33241f5db603db154a7a424c6d724e4b6c1152e2fa404e465"} Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.448812 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea6212d04a5b35b33241f5db603db154a7a424c6d724e4b6c1152e2fa404e465" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.448865 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.542249 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8"] Nov 25 20:08:18 crc kubenswrapper[4775]: E1125 20:08:18.542714 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9057fb85-d24d-4016-ac68-44e9e52440dd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.542733 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9057fb85-d24d-4016-ac68-44e9e52440dd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.542960 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9057fb85-d24d-4016-ac68-44e9e52440dd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 
20:08:18.543726 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.547089 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.547712 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.547931 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.548561 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.551698 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.562579 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8"] Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.715422 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.715499 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: 
\"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.715595 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcqps\" (UniqueName: \"kubernetes.io/projected/8fd008ce-79f9-4041-ad04-856eba5e0536-kube-api-access-dcqps\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.715671 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.715741 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.817337 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 
20:08:18.817453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.817518 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.817544 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcqps\" (UniqueName: \"kubernetes.io/projected/8fd008ce-79f9-4041-ad04-856eba5e0536-kube-api-access-dcqps\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.817597 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.821972 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: 
\"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.822504 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.824960 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.832996 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.842578 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcqps\" (UniqueName: \"kubernetes.io/projected/8fd008ce-79f9-4041-ad04-856eba5e0536-kube-api-access-dcqps\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:18 crc kubenswrapper[4775]: I1125 20:08:18.894511 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:08:19 crc kubenswrapper[4775]: I1125 20:08:19.433718 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8"] Nov 25 20:08:19 crc kubenswrapper[4775]: I1125 20:08:19.460539 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" event={"ID":"8fd008ce-79f9-4041-ad04-856eba5e0536","Type":"ContainerStarted","Data":"489175785a2feeda6e71f892088a5fd650b384a6eded9b712ee79651f6e45d76"} Nov 25 20:08:20 crc kubenswrapper[4775]: I1125 20:08:20.477567 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" event={"ID":"8fd008ce-79f9-4041-ad04-856eba5e0536","Type":"ContainerStarted","Data":"0526d157d4ec1493d35783bcb4a4fc660e91b18aa1c4e8f9daad3313cd372479"} Nov 25 20:08:20 crc kubenswrapper[4775]: I1125 20:08:20.507936 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" podStartSLOduration=1.9867041460000001 podStartE2EDuration="2.507912587s" podCreationTimestamp="2025-11-25 20:08:18 +0000 UTC" firstStartedPulling="2025-11-25 20:08:19.440116205 +0000 UTC m=+2081.356478571" lastFinishedPulling="2025-11-25 20:08:19.961324606 +0000 UTC m=+2081.877687012" observedRunningTime="2025-11-25 20:08:20.500579558 +0000 UTC m=+2082.416941934" watchObservedRunningTime="2025-11-25 20:08:20.507912587 +0000 UTC m=+2082.424274973" Nov 25 20:08:50 crc kubenswrapper[4775]: I1125 20:08:50.718384 4775 scope.go:117] "RemoveContainer" containerID="a5492b7d3899805941f7086d746b8e0490a39870dd419db30a7dd3eb0373c9d3" Nov 25 20:08:50 crc kubenswrapper[4775]: I1125 20:08:50.801941 4775 scope.go:117] "RemoveContainer" containerID="79b65ec4d0cdd98f40628c9c991d5c6e95dda74131836baf45a6aa392b2615f8" Nov 25 20:08:50 crc 
kubenswrapper[4775]: I1125 20:08:50.868633 4775 scope.go:117] "RemoveContainer" containerID="7733cb57da748ed6d49710745ceeebabf444a0246f1153b8f50bd9e39c5ac500" Nov 25 20:08:50 crc kubenswrapper[4775]: I1125 20:08:50.902978 4775 scope.go:117] "RemoveContainer" containerID="fd36e15664485fcc3f4745627cc0d8102b61e9506ae49a9f703b5a359281b7f9" Nov 25 20:08:50 crc kubenswrapper[4775]: I1125 20:08:50.968023 4775 scope.go:117] "RemoveContainer" containerID="74bd276437fa9e3870f16c4b0c9f533e89d3c06ffcb34fbc21df39187915862a" Nov 25 20:08:51 crc kubenswrapper[4775]: I1125 20:08:51.004938 4775 scope.go:117] "RemoveContainer" containerID="eda68ae99d02cc4ec32cf932d8d403d0503414e63f2f02108c50bc481c1403a6" Nov 25 20:08:51 crc kubenswrapper[4775]: I1125 20:08:51.045853 4775 scope.go:117] "RemoveContainer" containerID="dc91a2dc8add14b559abd477f7a75a2d4b29e530fdf6c374d3a1022e03c04120" Nov 25 20:09:51 crc kubenswrapper[4775]: I1125 20:09:51.249581 4775 scope.go:117] "RemoveContainer" containerID="1b1f7c46777e4b941a4b23ab85653c1dc520e47750fee4f9fc439c6776740656" Nov 25 20:09:51 crc kubenswrapper[4775]: I1125 20:09:51.293355 4775 scope.go:117] "RemoveContainer" containerID="bd82e87b329eea56a68faa2940eb531b31934998b927abb0b0df2a97c7f8e30b" Nov 25 20:10:00 crc kubenswrapper[4775]: I1125 20:10:00.510300 4775 generic.go:334] "Generic (PLEG): container finished" podID="8fd008ce-79f9-4041-ad04-856eba5e0536" containerID="0526d157d4ec1493d35783bcb4a4fc660e91b18aa1c4e8f9daad3313cd372479" exitCode=0 Nov 25 20:10:00 crc kubenswrapper[4775]: I1125 20:10:00.510442 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" event={"ID":"8fd008ce-79f9-4041-ad04-856eba5e0536","Type":"ContainerDied","Data":"0526d157d4ec1493d35783bcb4a4fc660e91b18aa1c4e8f9daad3313cd372479"} Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.033550 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.159509 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ceph\") pod \"8fd008ce-79f9-4041-ad04-856eba5e0536\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.159548 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-inventory\") pod \"8fd008ce-79f9-4041-ad04-856eba5e0536\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.159618 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-bootstrap-combined-ca-bundle\") pod \"8fd008ce-79f9-4041-ad04-856eba5e0536\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.160286 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ssh-key\") pod \"8fd008ce-79f9-4041-ad04-856eba5e0536\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.160341 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcqps\" (UniqueName: \"kubernetes.io/projected/8fd008ce-79f9-4041-ad04-856eba5e0536-kube-api-access-dcqps\") pod \"8fd008ce-79f9-4041-ad04-856eba5e0536\" (UID: \"8fd008ce-79f9-4041-ad04-856eba5e0536\") " Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.165935 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8fd008ce-79f9-4041-ad04-856eba5e0536" (UID: "8fd008ce-79f9-4041-ad04-856eba5e0536"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.166876 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd008ce-79f9-4041-ad04-856eba5e0536-kube-api-access-dcqps" (OuterVolumeSpecName: "kube-api-access-dcqps") pod "8fd008ce-79f9-4041-ad04-856eba5e0536" (UID: "8fd008ce-79f9-4041-ad04-856eba5e0536"). InnerVolumeSpecName "kube-api-access-dcqps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.176763 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ceph" (OuterVolumeSpecName: "ceph") pod "8fd008ce-79f9-4041-ad04-856eba5e0536" (UID: "8fd008ce-79f9-4041-ad04-856eba5e0536"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.200073 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8fd008ce-79f9-4041-ad04-856eba5e0536" (UID: "8fd008ce-79f9-4041-ad04-856eba5e0536"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.207329 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-inventory" (OuterVolumeSpecName: "inventory") pod "8fd008ce-79f9-4041-ad04-856eba5e0536" (UID: "8fd008ce-79f9-4041-ad04-856eba5e0536"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.263285 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.263348 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.263376 4775 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.263402 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8fd008ce-79f9-4041-ad04-856eba5e0536-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.263427 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcqps\" (UniqueName: \"kubernetes.io/projected/8fd008ce-79f9-4041-ad04-856eba5e0536-kube-api-access-dcqps\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.553438 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" event={"ID":"8fd008ce-79f9-4041-ad04-856eba5e0536","Type":"ContainerDied","Data":"489175785a2feeda6e71f892088a5fd650b384a6eded9b712ee79651f6e45d76"} Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.553510 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="489175785a2feeda6e71f892088a5fd650b384a6eded9b712ee79651f6e45d76" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.553643 4775 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.675432 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl"] Nov 25 20:10:02 crc kubenswrapper[4775]: E1125 20:10:02.676194 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd008ce-79f9-4041-ad04-856eba5e0536" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.676312 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd008ce-79f9-4041-ad04-856eba5e0536" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.676676 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd008ce-79f9-4041-ad04-856eba5e0536" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.677503 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.681799 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.683056 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.683621 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.684239 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.684544 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.691976 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl"] Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.775573 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9tk\" (UniqueName: \"kubernetes.io/projected/9e54b6d7-5c5a-498c-868e-e7a35b93b448-kube-api-access-zc9tk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.775855 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: 
\"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.775961 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.776226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.878472 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.878520 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.878584 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.878669 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9tk\" (UniqueName: \"kubernetes.io/projected/9e54b6d7-5c5a-498c-868e-e7a35b93b448-kube-api-access-zc9tk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.883209 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.883772 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.886280 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: 
\"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:02 crc kubenswrapper[4775]: I1125 20:10:02.907591 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9tk\" (UniqueName: \"kubernetes.io/projected/9e54b6d7-5c5a-498c-868e-e7a35b93b448-kube-api-access-zc9tk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:03 crc kubenswrapper[4775]: I1125 20:10:03.003012 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:03 crc kubenswrapper[4775]: I1125 20:10:03.382383 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl"] Nov 25 20:10:03 crc kubenswrapper[4775]: I1125 20:10:03.564670 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" event={"ID":"9e54b6d7-5c5a-498c-868e-e7a35b93b448","Type":"ContainerStarted","Data":"14ee910c2d30768d5d77d241a106edc087e9a6ebda6db8aefce0515341862fe3"} Nov 25 20:10:04 crc kubenswrapper[4775]: I1125 20:10:04.579236 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" event={"ID":"9e54b6d7-5c5a-498c-868e-e7a35b93b448","Type":"ContainerStarted","Data":"14757b3fb02d8383c7ef5c27e56ee8125de675ebb250be819592f20ae60dd19e"} Nov 25 20:10:04 crc kubenswrapper[4775]: I1125 20:10:04.610348 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" podStartSLOduration=2.1544216 podStartE2EDuration="2.610326558s" podCreationTimestamp="2025-11-25 20:10:02 
+0000 UTC" firstStartedPulling="2025-11-25 20:10:03.395569141 +0000 UTC m=+2185.311931527" lastFinishedPulling="2025-11-25 20:10:03.851474069 +0000 UTC m=+2185.767836485" observedRunningTime="2025-11-25 20:10:04.598920448 +0000 UTC m=+2186.515282854" watchObservedRunningTime="2025-11-25 20:10:04.610326558 +0000 UTC m=+2186.526688934" Nov 25 20:10:11 crc kubenswrapper[4775]: I1125 20:10:11.069989 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:10:11 crc kubenswrapper[4775]: I1125 20:10:11.070726 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:10:31 crc kubenswrapper[4775]: I1125 20:10:31.872431 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e54b6d7-5c5a-498c-868e-e7a35b93b448" containerID="14757b3fb02d8383c7ef5c27e56ee8125de675ebb250be819592f20ae60dd19e" exitCode=0 Nov 25 20:10:31 crc kubenswrapper[4775]: I1125 20:10:31.872541 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" event={"ID":"9e54b6d7-5c5a-498c-868e-e7a35b93b448","Type":"ContainerDied","Data":"14757b3fb02d8383c7ef5c27e56ee8125de675ebb250be819592f20ae60dd19e"} Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.431063 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.457347 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9tk\" (UniqueName: \"kubernetes.io/projected/9e54b6d7-5c5a-498c-868e-e7a35b93b448-kube-api-access-zc9tk\") pod \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.457439 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ssh-key\") pod \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.457496 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-inventory\") pod \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.457576 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ceph\") pod \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\" (UID: \"9e54b6d7-5c5a-498c-868e-e7a35b93b448\") " Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.463809 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e54b6d7-5c5a-498c-868e-e7a35b93b448-kube-api-access-zc9tk" (OuterVolumeSpecName: "kube-api-access-zc9tk") pod "9e54b6d7-5c5a-498c-868e-e7a35b93b448" (UID: "9e54b6d7-5c5a-498c-868e-e7a35b93b448"). InnerVolumeSpecName "kube-api-access-zc9tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.466881 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ceph" (OuterVolumeSpecName: "ceph") pod "9e54b6d7-5c5a-498c-868e-e7a35b93b448" (UID: "9e54b6d7-5c5a-498c-868e-e7a35b93b448"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.492514 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-inventory" (OuterVolumeSpecName: "inventory") pod "9e54b6d7-5c5a-498c-868e-e7a35b93b448" (UID: "9e54b6d7-5c5a-498c-868e-e7a35b93b448"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.507699 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9e54b6d7-5c5a-498c-868e-e7a35b93b448" (UID: "9e54b6d7-5c5a-498c-868e-e7a35b93b448"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.559513 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.559546 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9tk\" (UniqueName: \"kubernetes.io/projected/9e54b6d7-5c5a-498c-868e-e7a35b93b448-kube-api-access-zc9tk\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.559558 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.559567 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e54b6d7-5c5a-498c-868e-e7a35b93b448-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.898473 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" event={"ID":"9e54b6d7-5c5a-498c-868e-e7a35b93b448","Type":"ContainerDied","Data":"14ee910c2d30768d5d77d241a106edc087e9a6ebda6db8aefce0515341862fe3"} Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.898552 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ee910c2d30768d5d77d241a106edc087e9a6ebda6db8aefce0515341862fe3" Nov 25 20:10:33 crc kubenswrapper[4775]: I1125 20:10:33.898696 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.032922 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr"] Nov 25 20:10:34 crc kubenswrapper[4775]: E1125 20:10:34.033420 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e54b6d7-5c5a-498c-868e-e7a35b93b448" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.033443 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e54b6d7-5c5a-498c-868e-e7a35b93b448" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.033727 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e54b6d7-5c5a-498c-868e-e7a35b93b448" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.034457 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.037713 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.037742 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.037796 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.037995 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.039170 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.043160 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr"] Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.071187 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.071256 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.072390 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8lml\" (UniqueName: \"kubernetes.io/projected/db1b8608-e0b8-498f-94de-bff78ef4a19c-kube-api-access-g8lml\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.072540 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.174152 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.175462 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.175615 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-g8lml\" (UniqueName: \"kubernetes.io/projected/db1b8608-e0b8-498f-94de-bff78ef4a19c-kube-api-access-g8lml\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.175804 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.179943 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.181089 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.181371 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.194725 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8lml\" (UniqueName: \"kubernetes.io/projected/db1b8608-e0b8-498f-94de-bff78ef4a19c-kube-api-access-g8lml\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.391927 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:34 crc kubenswrapper[4775]: I1125 20:10:34.987836 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr"] Nov 25 20:10:35 crc kubenswrapper[4775]: I1125 20:10:35.923108 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" event={"ID":"db1b8608-e0b8-498f-94de-bff78ef4a19c","Type":"ContainerStarted","Data":"a5fff1ab00e80f7d6da7d31a7ce7c20db89986c70c21f1a36335eef2fa025087"} Nov 25 20:10:35 crc kubenswrapper[4775]: I1125 20:10:35.924001 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" event={"ID":"db1b8608-e0b8-498f-94de-bff78ef4a19c","Type":"ContainerStarted","Data":"613e764fc42c5d3858481a6eb74e34fb91ab4c9595de3a111ba08b70688048c4"} Nov 25 20:10:35 crc kubenswrapper[4775]: I1125 20:10:35.961059 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" podStartSLOduration=2.4685207829999998 podStartE2EDuration="2.961030572s" podCreationTimestamp="2025-11-25 20:10:33 +0000 UTC" firstStartedPulling="2025-11-25 
20:10:34.997989799 +0000 UTC m=+2216.914352165" lastFinishedPulling="2025-11-25 20:10:35.490499548 +0000 UTC m=+2217.406861954" observedRunningTime="2025-11-25 20:10:35.946882678 +0000 UTC m=+2217.863245054" watchObservedRunningTime="2025-11-25 20:10:35.961030572 +0000 UTC m=+2217.877392968" Nov 25 20:10:41 crc kubenswrapper[4775]: I1125 20:10:41.070762 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:10:41 crc kubenswrapper[4775]: I1125 20:10:41.071424 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:10:41 crc kubenswrapper[4775]: I1125 20:10:41.997987 4775 generic.go:334] "Generic (PLEG): container finished" podID="db1b8608-e0b8-498f-94de-bff78ef4a19c" containerID="a5fff1ab00e80f7d6da7d31a7ce7c20db89986c70c21f1a36335eef2fa025087" exitCode=0 Nov 25 20:10:41 crc kubenswrapper[4775]: I1125 20:10:41.998027 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" event={"ID":"db1b8608-e0b8-498f-94de-bff78ef4a19c","Type":"ContainerDied","Data":"a5fff1ab00e80f7d6da7d31a7ce7c20db89986c70c21f1a36335eef2fa025087"} Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.488247 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.571108 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8lml\" (UniqueName: \"kubernetes.io/projected/db1b8608-e0b8-498f-94de-bff78ef4a19c-kube-api-access-g8lml\") pod \"db1b8608-e0b8-498f-94de-bff78ef4a19c\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.571315 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-inventory\") pod \"db1b8608-e0b8-498f-94de-bff78ef4a19c\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.571515 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ssh-key\") pod \"db1b8608-e0b8-498f-94de-bff78ef4a19c\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.571756 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ceph\") pod \"db1b8608-e0b8-498f-94de-bff78ef4a19c\" (UID: \"db1b8608-e0b8-498f-94de-bff78ef4a19c\") " Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.576500 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1b8608-e0b8-498f-94de-bff78ef4a19c-kube-api-access-g8lml" (OuterVolumeSpecName: "kube-api-access-g8lml") pod "db1b8608-e0b8-498f-94de-bff78ef4a19c" (UID: "db1b8608-e0b8-498f-94de-bff78ef4a19c"). InnerVolumeSpecName "kube-api-access-g8lml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.592867 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ceph" (OuterVolumeSpecName: "ceph") pod "db1b8608-e0b8-498f-94de-bff78ef4a19c" (UID: "db1b8608-e0b8-498f-94de-bff78ef4a19c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.604888 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-inventory" (OuterVolumeSpecName: "inventory") pod "db1b8608-e0b8-498f-94de-bff78ef4a19c" (UID: "db1b8608-e0b8-498f-94de-bff78ef4a19c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.608413 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "db1b8608-e0b8-498f-94de-bff78ef4a19c" (UID: "db1b8608-e0b8-498f-94de-bff78ef4a19c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.673212 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8lml\" (UniqueName: \"kubernetes.io/projected/db1b8608-e0b8-498f-94de-bff78ef4a19c-kube-api-access-g8lml\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.673249 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.673262 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:43 crc kubenswrapper[4775]: I1125 20:10:43.673274 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db1b8608-e0b8-498f-94de-bff78ef4a19c-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.018361 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" event={"ID":"db1b8608-e0b8-498f-94de-bff78ef4a19c","Type":"ContainerDied","Data":"613e764fc42c5d3858481a6eb74e34fb91ab4c9595de3a111ba08b70688048c4"} Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.018712 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="613e764fc42c5d3858481a6eb74e34fb91ab4c9595de3a111ba08b70688048c4" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.018499 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.103422 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6"] Nov 25 20:10:44 crc kubenswrapper[4775]: E1125 20:10:44.103909 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1b8608-e0b8-498f-94de-bff78ef4a19c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.103933 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1b8608-e0b8-498f-94de-bff78ef4a19c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.104142 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1b8608-e0b8-498f-94de-bff78ef4a19c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.104835 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.108259 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.108278 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.108397 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.115768 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.115798 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.145561 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6"] Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.183607 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.183673 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.183694 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.183758 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtb5p\" (UniqueName: \"kubernetes.io/projected/0b0b1001-6bd7-4db7-817b-dcb453399b78-kube-api-access-vtb5p\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.285780 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.285874 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.285924 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.286063 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtb5p\" (UniqueName: \"kubernetes.io/projected/0b0b1001-6bd7-4db7-817b-dcb453399b78-kube-api-access-vtb5p\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.290207 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.291108 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.301270 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.307784 
4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtb5p\" (UniqueName: \"kubernetes.io/projected/0b0b1001-6bd7-4db7-817b-dcb453399b78-kube-api-access-vtb5p\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qhz6\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.454287 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:10:44 crc kubenswrapper[4775]: W1125 20:10:44.952170 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b0b1001_6bd7_4db7_817b_dcb453399b78.slice/crio-1e30c361e9c4500cab271a87ed1c92c79658a3c7ac9c90d355ca5f1d7393a342 WatchSource:0}: Error finding container 1e30c361e9c4500cab271a87ed1c92c79658a3c7ac9c90d355ca5f1d7393a342: Status 404 returned error can't find the container with id 1e30c361e9c4500cab271a87ed1c92c79658a3c7ac9c90d355ca5f1d7393a342 Nov 25 20:10:44 crc kubenswrapper[4775]: I1125 20:10:44.953119 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6"] Nov 25 20:10:45 crc kubenswrapper[4775]: I1125 20:10:45.029388 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" event={"ID":"0b0b1001-6bd7-4db7-817b-dcb453399b78","Type":"ContainerStarted","Data":"1e30c361e9c4500cab271a87ed1c92c79658a3c7ac9c90d355ca5f1d7393a342"} Nov 25 20:10:46 crc kubenswrapper[4775]: I1125 20:10:46.039126 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" event={"ID":"0b0b1001-6bd7-4db7-817b-dcb453399b78","Type":"ContainerStarted","Data":"382844faaf0fee54a80b10a336d2ef6aad0d0f002ba029f59d110ae1a4271ae6"} Nov 
25 20:10:46 crc kubenswrapper[4775]: I1125 20:10:46.062266 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" podStartSLOduration=1.423625671 podStartE2EDuration="2.06224315s" podCreationTimestamp="2025-11-25 20:10:44 +0000 UTC" firstStartedPulling="2025-11-25 20:10:44.955229324 +0000 UTC m=+2226.871591690" lastFinishedPulling="2025-11-25 20:10:45.593846803 +0000 UTC m=+2227.510209169" observedRunningTime="2025-11-25 20:10:46.05781469 +0000 UTC m=+2227.974177056" watchObservedRunningTime="2025-11-25 20:10:46.06224315 +0000 UTC m=+2227.978605546" Nov 25 20:10:51 crc kubenswrapper[4775]: I1125 20:10:51.378131 4775 scope.go:117] "RemoveContainer" containerID="a73328fa387252255ab35bcf877b8dc07c3fbe73e312e9d7351aea4f72df7621" Nov 25 20:11:11 crc kubenswrapper[4775]: I1125 20:11:11.070242 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:11:11 crc kubenswrapper[4775]: I1125 20:11:11.071082 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:11:11 crc kubenswrapper[4775]: I1125 20:11:11.071164 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 20:11:11 crc kubenswrapper[4775]: I1125 20:11:11.072402 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:11:11 crc kubenswrapper[4775]: I1125 20:11:11.072537 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" gracePeriod=600 Nov 25 20:11:11 crc kubenswrapper[4775]: E1125 20:11:11.195800 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:11:11 crc kubenswrapper[4775]: I1125 20:11:11.292905 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" exitCode=0 Nov 25 20:11:11 crc kubenswrapper[4775]: I1125 20:11:11.292953 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717"} Nov 25 20:11:11 crc kubenswrapper[4775]: I1125 20:11:11.292992 4775 scope.go:117] "RemoveContainer" containerID="43c10e56638b61bef408264f80dccc089018bf2c0426ecc5f9c21c1f71057a16" Nov 25 20:11:11 crc kubenswrapper[4775]: I1125 20:11:11.293817 4775 
scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:11:11 crc kubenswrapper[4775]: E1125 20:11:11.294140 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.505915 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mhqv"] Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.508599 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.540692 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mhqv"] Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.563246 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhrm\" (UniqueName: \"kubernetes.io/projected/65bb486f-e7f0-4b80-b8bb-f46971b2fc53-kube-api-access-skhrm\") pod \"community-operators-4mhqv\" (UID: \"65bb486f-e7f0-4b80-b8bb-f46971b2fc53\") " pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.563355 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65bb486f-e7f0-4b80-b8bb-f46971b2fc53-utilities\") pod \"community-operators-4mhqv\" (UID: \"65bb486f-e7f0-4b80-b8bb-f46971b2fc53\") " pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc 
kubenswrapper[4775]: I1125 20:11:19.563417 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65bb486f-e7f0-4b80-b8bb-f46971b2fc53-catalog-content\") pod \"community-operators-4mhqv\" (UID: \"65bb486f-e7f0-4b80-b8bb-f46971b2fc53\") " pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.665488 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skhrm\" (UniqueName: \"kubernetes.io/projected/65bb486f-e7f0-4b80-b8bb-f46971b2fc53-kube-api-access-skhrm\") pod \"community-operators-4mhqv\" (UID: \"65bb486f-e7f0-4b80-b8bb-f46971b2fc53\") " pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.665689 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65bb486f-e7f0-4b80-b8bb-f46971b2fc53-utilities\") pod \"community-operators-4mhqv\" (UID: \"65bb486f-e7f0-4b80-b8bb-f46971b2fc53\") " pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.665802 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65bb486f-e7f0-4b80-b8bb-f46971b2fc53-catalog-content\") pod \"community-operators-4mhqv\" (UID: \"65bb486f-e7f0-4b80-b8bb-f46971b2fc53\") " pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.666450 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65bb486f-e7f0-4b80-b8bb-f46971b2fc53-catalog-content\") pod \"community-operators-4mhqv\" (UID: \"65bb486f-e7f0-4b80-b8bb-f46971b2fc53\") " pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc 
kubenswrapper[4775]: I1125 20:11:19.667278 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65bb486f-e7f0-4b80-b8bb-f46971b2fc53-utilities\") pod \"community-operators-4mhqv\" (UID: \"65bb486f-e7f0-4b80-b8bb-f46971b2fc53\") " pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.696713 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhrm\" (UniqueName: \"kubernetes.io/projected/65bb486f-e7f0-4b80-b8bb-f46971b2fc53-kube-api-access-skhrm\") pod \"community-operators-4mhqv\" (UID: \"65bb486f-e7f0-4b80-b8bb-f46971b2fc53\") " pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:19 crc kubenswrapper[4775]: I1125 20:11:19.838370 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:20 crc kubenswrapper[4775]: I1125 20:11:20.395082 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mhqv"] Nov 25 20:11:20 crc kubenswrapper[4775]: I1125 20:11:20.481388 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mhqv" event={"ID":"65bb486f-e7f0-4b80-b8bb-f46971b2fc53","Type":"ContainerStarted","Data":"e6774030134e5d45697366ae354ab074a18cfa388f974ec0dedb7c6c77ea46cd"} Nov 25 20:11:21 crc kubenswrapper[4775]: I1125 20:11:21.496096 4775 generic.go:334] "Generic (PLEG): container finished" podID="65bb486f-e7f0-4b80-b8bb-f46971b2fc53" containerID="7bdecc1f5002f7352d7089eec273df02a95fb5b0970e67ade9a465c5cdcc4a5b" exitCode=0 Nov 25 20:11:21 crc kubenswrapper[4775]: I1125 20:11:21.496250 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mhqv" 
event={"ID":"65bb486f-e7f0-4b80-b8bb-f46971b2fc53","Type":"ContainerDied","Data":"7bdecc1f5002f7352d7089eec273df02a95fb5b0970e67ade9a465c5cdcc4a5b"} Nov 25 20:11:22 crc kubenswrapper[4775]: I1125 20:11:22.846832 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:11:22 crc kubenswrapper[4775]: E1125 20:11:22.847060 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.477987 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lvzsc"] Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.480274 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.493261 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvzsc"] Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.544516 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-catalog-content\") pod \"redhat-marketplace-lvzsc\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.544567 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-utilities\") pod \"redhat-marketplace-lvzsc\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.544611 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85q8g\" (UniqueName: \"kubernetes.io/projected/c29ee970-138b-49b0-8fd3-31326275573a-kube-api-access-85q8g\") pod \"redhat-marketplace-lvzsc\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.645992 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-catalog-content\") pod \"redhat-marketplace-lvzsc\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.646070 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-utilities\") pod \"redhat-marketplace-lvzsc\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.646137 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85q8g\" (UniqueName: \"kubernetes.io/projected/c29ee970-138b-49b0-8fd3-31326275573a-kube-api-access-85q8g\") pod \"redhat-marketplace-lvzsc\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.647192 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-catalog-content\") pod \"redhat-marketplace-lvzsc\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.647224 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-utilities\") pod \"redhat-marketplace-lvzsc\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.668933 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85q8g\" (UniqueName: \"kubernetes.io/projected/c29ee970-138b-49b0-8fd3-31326275573a-kube-api-access-85q8g\") pod \"redhat-marketplace-lvzsc\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:23 crc kubenswrapper[4775]: I1125 20:11:23.800744 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:27 crc kubenswrapper[4775]: I1125 20:11:27.083894 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvzsc"] Nov 25 20:11:27 crc kubenswrapper[4775]: I1125 20:11:27.551976 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvzsc" event={"ID":"c29ee970-138b-49b0-8fd3-31326275573a","Type":"ContainerStarted","Data":"8199cdaf81c67f0eaf28c134b02185f4dc2934277de98e3e67787b7cbcb61880"} Nov 25 20:11:27 crc kubenswrapper[4775]: I1125 20:11:27.552027 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvzsc" event={"ID":"c29ee970-138b-49b0-8fd3-31326275573a","Type":"ContainerStarted","Data":"46f98e6132544e2b2becc75202be69cbbda8eda0741261a2d52782eea7fba978"} Nov 25 20:11:27 crc kubenswrapper[4775]: I1125 20:11:27.557255 4775 generic.go:334] "Generic (PLEG): container finished" podID="65bb486f-e7f0-4b80-b8bb-f46971b2fc53" containerID="9c5d568af1f1327e120b55f49167e2a38777cf0a89b5a761040150bc1ae37f6c" exitCode=0 Nov 25 20:11:27 crc kubenswrapper[4775]: I1125 20:11:27.557302 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mhqv" event={"ID":"65bb486f-e7f0-4b80-b8bb-f46971b2fc53","Type":"ContainerDied","Data":"9c5d568af1f1327e120b55f49167e2a38777cf0a89b5a761040150bc1ae37f6c"} Nov 25 20:11:28 crc kubenswrapper[4775]: I1125 20:11:28.576343 4775 generic.go:334] "Generic (PLEG): container finished" podID="c29ee970-138b-49b0-8fd3-31326275573a" containerID="8199cdaf81c67f0eaf28c134b02185f4dc2934277de98e3e67787b7cbcb61880" exitCode=0 Nov 25 20:11:28 crc kubenswrapper[4775]: I1125 20:11:28.576552 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvzsc" 
event={"ID":"c29ee970-138b-49b0-8fd3-31326275573a","Type":"ContainerDied","Data":"8199cdaf81c67f0eaf28c134b02185f4dc2934277de98e3e67787b7cbcb61880"} Nov 25 20:11:28 crc kubenswrapper[4775]: I1125 20:11:28.583629 4775 generic.go:334] "Generic (PLEG): container finished" podID="0b0b1001-6bd7-4db7-817b-dcb453399b78" containerID="382844faaf0fee54a80b10a336d2ef6aad0d0f002ba029f59d110ae1a4271ae6" exitCode=0 Nov 25 20:11:28 crc kubenswrapper[4775]: I1125 20:11:28.583734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" event={"ID":"0b0b1001-6bd7-4db7-817b-dcb453399b78","Type":"ContainerDied","Data":"382844faaf0fee54a80b10a336d2ef6aad0d0f002ba029f59d110ae1a4271ae6"} Nov 25 20:11:29 crc kubenswrapper[4775]: I1125 20:11:29.594833 4775 generic.go:334] "Generic (PLEG): container finished" podID="c29ee970-138b-49b0-8fd3-31326275573a" containerID="2f656fa9486d3bec2b8afed606fb5f6f34d47b6db4fb1af6fd79ffcd8412fc8b" exitCode=0 Nov 25 20:11:29 crc kubenswrapper[4775]: I1125 20:11:29.594949 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvzsc" event={"ID":"c29ee970-138b-49b0-8fd3-31326275573a","Type":"ContainerDied","Data":"2f656fa9486d3bec2b8afed606fb5f6f34d47b6db4fb1af6fd79ffcd8412fc8b"} Nov 25 20:11:29 crc kubenswrapper[4775]: I1125 20:11:29.603830 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mhqv" event={"ID":"65bb486f-e7f0-4b80-b8bb-f46971b2fc53","Type":"ContainerStarted","Data":"ba2b43320d5932003547cd383ff94b0e7d7402230166ba45369c2c0115eb046f"} Nov 25 20:11:29 crc kubenswrapper[4775]: I1125 20:11:29.640943 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mhqv" podStartSLOduration=3.532893485 podStartE2EDuration="10.640925231s" podCreationTimestamp="2025-11-25 20:11:19 +0000 UTC" firstStartedPulling="2025-11-25 
20:11:21.498807146 +0000 UTC m=+2263.415169552" lastFinishedPulling="2025-11-25 20:11:28.606838912 +0000 UTC m=+2270.523201298" observedRunningTime="2025-11-25 20:11:29.638748303 +0000 UTC m=+2271.555110669" watchObservedRunningTime="2025-11-25 20:11:29.640925231 +0000 UTC m=+2271.557287607" Nov 25 20:11:29 crc kubenswrapper[4775]: I1125 20:11:29.838967 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:29 crc kubenswrapper[4775]: I1125 20:11:29.839023 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.145881 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.280167 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-inventory\") pod \"0b0b1001-6bd7-4db7-817b-dcb453399b78\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.280358 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ceph\") pod \"0b0b1001-6bd7-4db7-817b-dcb453399b78\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.280407 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtb5p\" (UniqueName: \"kubernetes.io/projected/0b0b1001-6bd7-4db7-817b-dcb453399b78-kube-api-access-vtb5p\") pod \"0b0b1001-6bd7-4db7-817b-dcb453399b78\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.280427 4775 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ssh-key\") pod \"0b0b1001-6bd7-4db7-817b-dcb453399b78\" (UID: \"0b0b1001-6bd7-4db7-817b-dcb453399b78\") " Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.287893 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0b1001-6bd7-4db7-817b-dcb453399b78-kube-api-access-vtb5p" (OuterVolumeSpecName: "kube-api-access-vtb5p") pod "0b0b1001-6bd7-4db7-817b-dcb453399b78" (UID: "0b0b1001-6bd7-4db7-817b-dcb453399b78"). InnerVolumeSpecName "kube-api-access-vtb5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.290074 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ceph" (OuterVolumeSpecName: "ceph") pod "0b0b1001-6bd7-4db7-817b-dcb453399b78" (UID: "0b0b1001-6bd7-4db7-817b-dcb453399b78"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.313638 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-inventory" (OuterVolumeSpecName: "inventory") pod "0b0b1001-6bd7-4db7-817b-dcb453399b78" (UID: "0b0b1001-6bd7-4db7-817b-dcb453399b78"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.314800 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0b0b1001-6bd7-4db7-817b-dcb453399b78" (UID: "0b0b1001-6bd7-4db7-817b-dcb453399b78"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.383449 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.383743 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.383824 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtb5p\" (UniqueName: \"kubernetes.io/projected/0b0b1001-6bd7-4db7-817b-dcb453399b78-kube-api-access-vtb5p\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.383921 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b0b1001-6bd7-4db7-817b-dcb453399b78-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.614636 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.614628 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qhz6" event={"ID":"0b0b1001-6bd7-4db7-817b-dcb453399b78","Type":"ContainerDied","Data":"1e30c361e9c4500cab271a87ed1c92c79658a3c7ac9c90d355ca5f1d7393a342"} Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.615520 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e30c361e9c4500cab271a87ed1c92c79658a3c7ac9c90d355ca5f1d7393a342" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.617312 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvzsc" event={"ID":"c29ee970-138b-49b0-8fd3-31326275573a","Type":"ContainerStarted","Data":"f5d483c5d3bea546a53e88772cf36f68d013406e9c9ec808a06d229a83caa3ab"} Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.662274 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lvzsc" podStartSLOduration=6.232568423 podStartE2EDuration="7.662253225s" podCreationTimestamp="2025-11-25 20:11:23 +0000 UTC" firstStartedPulling="2025-11-25 20:11:28.591900417 +0000 UTC m=+2270.508262823" lastFinishedPulling="2025-11-25 20:11:30.021585259 +0000 UTC m=+2271.937947625" observedRunningTime="2025-11-25 20:11:30.64654117 +0000 UTC m=+2272.562903546" watchObservedRunningTime="2025-11-25 20:11:30.662253225 +0000 UTC m=+2272.578615581" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.717857 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh"] Nov 25 20:11:30 crc kubenswrapper[4775]: E1125 20:11:30.718199 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0b1001-6bd7-4db7-817b-dcb453399b78" 
containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.718215 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0b1001-6bd7-4db7-817b-dcb453399b78" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.718391 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0b1001-6bd7-4db7-817b-dcb453399b78" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.719013 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.721233 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.721346 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.721667 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.722035 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.722956 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.736096 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh"] Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.789365 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.789505 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.789563 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.789587 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stcb4\" (UniqueName: \"kubernetes.io/projected/cafc818a-081e-48dd-ae98-001a1c00b074-kube-api-access-stcb4\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.891592 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.891724 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.891752 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stcb4\" (UniqueName: \"kubernetes.io/projected/cafc818a-081e-48dd-ae98-001a1c00b074-kube-api-access-stcb4\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.891910 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.897307 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.898047 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.902259 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.917400 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stcb4\" (UniqueName: \"kubernetes.io/projected/cafc818a-081e-48dd-ae98-001a1c00b074-kube-api-access-stcb4\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:30 crc kubenswrapper[4775]: I1125 20:11:30.918797 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4mhqv" podUID="65bb486f-e7f0-4b80-b8bb-f46971b2fc53" containerName="registry-server" probeResult="failure" output=< Nov 25 20:11:30 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Nov 25 20:11:30 crc kubenswrapper[4775]: > Nov 25 20:11:31 crc kubenswrapper[4775]: I1125 20:11:31.042892 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:31 crc kubenswrapper[4775]: I1125 20:11:31.597006 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh"] Nov 25 20:11:31 crc kubenswrapper[4775]: I1125 20:11:31.634353 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" event={"ID":"cafc818a-081e-48dd-ae98-001a1c00b074","Type":"ContainerStarted","Data":"5e30110c60ec11af6b6cbebb5e19d108d90ea95f21ad61403772d1dd652b18da"} Nov 25 20:11:32 crc kubenswrapper[4775]: I1125 20:11:32.645483 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" event={"ID":"cafc818a-081e-48dd-ae98-001a1c00b074","Type":"ContainerStarted","Data":"561dfb4dfdba6a00157036a332dda718392ca97649647d34c30e02640741f9eb"} Nov 25 20:11:33 crc kubenswrapper[4775]: I1125 20:11:33.802236 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:33 crc kubenswrapper[4775]: I1125 20:11:33.802571 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:33 crc kubenswrapper[4775]: I1125 20:11:33.869726 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:33 crc kubenswrapper[4775]: I1125 20:11:33.903605 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" podStartSLOduration=3.480999839 podStartE2EDuration="3.903576973s" podCreationTimestamp="2025-11-25 20:11:30 +0000 UTC" firstStartedPulling="2025-11-25 20:11:31.598551644 +0000 UTC m=+2273.514914020" lastFinishedPulling="2025-11-25 20:11:32.021128748 +0000 
UTC m=+2273.937491154" observedRunningTime="2025-11-25 20:11:32.668176927 +0000 UTC m=+2274.584539323" watchObservedRunningTime="2025-11-25 20:11:33.903576973 +0000 UTC m=+2275.819939369" Nov 25 20:11:34 crc kubenswrapper[4775]: I1125 20:11:34.847567 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:11:34 crc kubenswrapper[4775]: E1125 20:11:34.848017 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:11:36 crc kubenswrapper[4775]: I1125 20:11:36.688968 4775 generic.go:334] "Generic (PLEG): container finished" podID="cafc818a-081e-48dd-ae98-001a1c00b074" containerID="561dfb4dfdba6a00157036a332dda718392ca97649647d34c30e02640741f9eb" exitCode=0 Nov 25 20:11:36 crc kubenswrapper[4775]: I1125 20:11:36.689046 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" event={"ID":"cafc818a-081e-48dd-ae98-001a1c00b074","Type":"ContainerDied","Data":"561dfb4dfdba6a00157036a332dda718392ca97649647d34c30e02640741f9eb"} Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.124171 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.248088 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stcb4\" (UniqueName: \"kubernetes.io/projected/cafc818a-081e-48dd-ae98-001a1c00b074-kube-api-access-stcb4\") pod \"cafc818a-081e-48dd-ae98-001a1c00b074\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.248372 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ssh-key\") pod \"cafc818a-081e-48dd-ae98-001a1c00b074\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.248414 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ceph\") pod \"cafc818a-081e-48dd-ae98-001a1c00b074\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.248526 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-inventory\") pod \"cafc818a-081e-48dd-ae98-001a1c00b074\" (UID: \"cafc818a-081e-48dd-ae98-001a1c00b074\") " Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.255080 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafc818a-081e-48dd-ae98-001a1c00b074-kube-api-access-stcb4" (OuterVolumeSpecName: "kube-api-access-stcb4") pod "cafc818a-081e-48dd-ae98-001a1c00b074" (UID: "cafc818a-081e-48dd-ae98-001a1c00b074"). InnerVolumeSpecName "kube-api-access-stcb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.257804 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ceph" (OuterVolumeSpecName: "ceph") pod "cafc818a-081e-48dd-ae98-001a1c00b074" (UID: "cafc818a-081e-48dd-ae98-001a1c00b074"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.289724 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-inventory" (OuterVolumeSpecName: "inventory") pod "cafc818a-081e-48dd-ae98-001a1c00b074" (UID: "cafc818a-081e-48dd-ae98-001a1c00b074"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.293328 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cafc818a-081e-48dd-ae98-001a1c00b074" (UID: "cafc818a-081e-48dd-ae98-001a1c00b074"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.350453 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stcb4\" (UniqueName: \"kubernetes.io/projected/cafc818a-081e-48dd-ae98-001a1c00b074-kube-api-access-stcb4\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.350487 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.350502 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.350513 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafc818a-081e-48dd-ae98-001a1c00b074-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.710702 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" event={"ID":"cafc818a-081e-48dd-ae98-001a1c00b074","Type":"ContainerDied","Data":"5e30110c60ec11af6b6cbebb5e19d108d90ea95f21ad61403772d1dd652b18da"} Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.711189 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e30110c60ec11af6b6cbebb5e19d108d90ea95f21ad61403772d1dd652b18da" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.710873 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.826197 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv"] Nov 25 20:11:38 crc kubenswrapper[4775]: E1125 20:11:38.828059 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafc818a-081e-48dd-ae98-001a1c00b074" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.828087 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafc818a-081e-48dd-ae98-001a1c00b074" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.828310 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafc818a-081e-48dd-ae98-001a1c00b074" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.829561 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.836422 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.836584 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.836619 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.836716 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.836802 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.839842 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv"] Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.965061 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql9bp\" (UniqueName: \"kubernetes.io/projected/ed47c3bd-5136-4d5a-946b-924498853472-kube-api-access-ql9bp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.965150 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: 
\"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.965395 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:38 crc kubenswrapper[4775]: I1125 20:11:38.965736 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.071024 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.071567 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql9bp\" (UniqueName: \"kubernetes.io/projected/ed47c3bd-5136-4d5a-946b-924498853472-kube-api-access-ql9bp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.071626 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.071779 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.090035 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.092475 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.097199 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql9bp\" (UniqueName: \"kubernetes.io/projected/ed47c3bd-5136-4d5a-946b-924498853472-kube-api-access-ql9bp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.140977 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.164732 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.718076 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv"] Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.909974 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:39 crc kubenswrapper[4775]: I1125 20:11:39.977268 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mhqv" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.080191 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mhqv"] Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.167571 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k2fd4"] Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.167835 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k2fd4" podUID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerName="registry-server" containerID="cri-o://05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd" gracePeriod=2 Nov 25 
20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.620575 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2fd4" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.708490 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqpdq\" (UniqueName: \"kubernetes.io/projected/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-kube-api-access-kqpdq\") pod \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.708556 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-catalog-content\") pod \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.708729 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-utilities\") pod \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\" (UID: \"1a73c6c1-fdc3-44b3-9b26-13821b4a7619\") " Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.709402 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-utilities" (OuterVolumeSpecName: "utilities") pod "1a73c6c1-fdc3-44b3-9b26-13821b4a7619" (UID: "1a73c6c1-fdc3-44b3-9b26-13821b4a7619"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.712831 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-kube-api-access-kqpdq" (OuterVolumeSpecName: "kube-api-access-kqpdq") pod "1a73c6c1-fdc3-44b3-9b26-13821b4a7619" (UID: "1a73c6c1-fdc3-44b3-9b26-13821b4a7619"). InnerVolumeSpecName "kube-api-access-kqpdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.731962 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" event={"ID":"ed47c3bd-5136-4d5a-946b-924498853472","Type":"ContainerStarted","Data":"8f33641947f42af62be3ce1f9e4098cd55ed1b1466b5551f134c01934230ba8e"} Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.732005 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" event={"ID":"ed47c3bd-5136-4d5a-946b-924498853472","Type":"ContainerStarted","Data":"a7c5d5dbf2ccd17ada1b94bca2e90645f5cddd816eaddead82ed6033867872b2"} Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.733713 4775 generic.go:334] "Generic (PLEG): container finished" podID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerID="05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd" exitCode=0 Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.734547 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k2fd4" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.734712 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fd4" event={"ID":"1a73c6c1-fdc3-44b3-9b26-13821b4a7619","Type":"ContainerDied","Data":"05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd"} Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.734738 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fd4" event={"ID":"1a73c6c1-fdc3-44b3-9b26-13821b4a7619","Type":"ContainerDied","Data":"1fbeae41095cb24daa97d7c2f86203760ef161d344d3e280dbe060bd325a1a6e"} Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.734755 4775 scope.go:117] "RemoveContainer" containerID="05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.757318 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" podStartSLOduration=2.297735839 podStartE2EDuration="2.757297166s" podCreationTimestamp="2025-11-25 20:11:38 +0000 UTC" firstStartedPulling="2025-11-25 20:11:39.725070697 +0000 UTC m=+2281.641433073" lastFinishedPulling="2025-11-25 20:11:40.184632034 +0000 UTC m=+2282.100994400" observedRunningTime="2025-11-25 20:11:40.74450547 +0000 UTC m=+2282.660867846" watchObservedRunningTime="2025-11-25 20:11:40.757297166 +0000 UTC m=+2282.673659532" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.763490 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a73c6c1-fdc3-44b3-9b26-13821b4a7619" (UID: "1a73c6c1-fdc3-44b3-9b26-13821b4a7619"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.764685 4775 scope.go:117] "RemoveContainer" containerID="ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.787154 4775 scope.go:117] "RemoveContainer" containerID="64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.808026 4775 scope.go:117] "RemoveContainer" containerID="05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd" Nov 25 20:11:40 crc kubenswrapper[4775]: E1125 20:11:40.808318 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd\": container with ID starting with 05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd not found: ID does not exist" containerID="05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.808351 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd"} err="failed to get container status \"05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd\": rpc error: code = NotFound desc = could not find container \"05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd\": container with ID starting with 05bf8de2aa67ffdd7cfaa742d08f343b48a3474ecf2d59212ba0fb0421fc07fd not found: ID does not exist" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.808373 4775 scope.go:117] "RemoveContainer" containerID="ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55" Nov 25 20:11:40 crc kubenswrapper[4775]: E1125 20:11:40.808664 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55\": container with ID starting with ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55 not found: ID does not exist" containerID="ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.808684 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55"} err="failed to get container status \"ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55\": rpc error: code = NotFound desc = could not find container \"ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55\": container with ID starting with ff2cb810cae92dbddc55a51e158efc8f6f5544266256e1ee6b02ccda0aecaa55 not found: ID does not exist" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.808698 4775 scope.go:117] "RemoveContainer" containerID="64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d" Nov 25 20:11:40 crc kubenswrapper[4775]: E1125 20:11:40.808970 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d\": container with ID starting with 64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d not found: ID does not exist" containerID="64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.808991 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d"} err="failed to get container status \"64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d\": rpc error: code = NotFound desc = could not find container \"64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d\": 
container with ID starting with 64f77387db3e59dbd567262a0ab663a225517a7b8c11b68b2d851c9542b4c56d not found: ID does not exist" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.811243 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqpdq\" (UniqueName: \"kubernetes.io/projected/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-kube-api-access-kqpdq\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.811273 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:40 crc kubenswrapper[4775]: I1125 20:11:40.811860 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a73c6c1-fdc3-44b3-9b26-13821b4a7619-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:41 crc kubenswrapper[4775]: I1125 20:11:41.117740 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k2fd4"] Nov 25 20:11:41 crc kubenswrapper[4775]: I1125 20:11:41.141111 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k2fd4"] Nov 25 20:11:42 crc kubenswrapper[4775]: I1125 20:11:42.864212 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" path="/var/lib/kubelet/pods/1a73c6c1-fdc3-44b3-9b26-13821b4a7619/volumes" Nov 25 20:11:43 crc kubenswrapper[4775]: I1125 20:11:43.852123 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:45 crc kubenswrapper[4775]: I1125 20:11:45.361700 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvzsc"] Nov 25 20:11:45 crc kubenswrapper[4775]: I1125 20:11:45.362199 4775 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lvzsc" podUID="c29ee970-138b-49b0-8fd3-31326275573a" containerName="registry-server" containerID="cri-o://f5d483c5d3bea546a53e88772cf36f68d013406e9c9ec808a06d229a83caa3ab" gracePeriod=2 Nov 25 20:11:45 crc kubenswrapper[4775]: I1125 20:11:45.781410 4775 generic.go:334] "Generic (PLEG): container finished" podID="c29ee970-138b-49b0-8fd3-31326275573a" containerID="f5d483c5d3bea546a53e88772cf36f68d013406e9c9ec808a06d229a83caa3ab" exitCode=0 Nov 25 20:11:45 crc kubenswrapper[4775]: I1125 20:11:45.781440 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvzsc" event={"ID":"c29ee970-138b-49b0-8fd3-31326275573a","Type":"ContainerDied","Data":"f5d483c5d3bea546a53e88772cf36f68d013406e9c9ec808a06d229a83caa3ab"} Nov 25 20:11:45 crc kubenswrapper[4775]: I1125 20:11:45.851722 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.010544 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85q8g\" (UniqueName: \"kubernetes.io/projected/c29ee970-138b-49b0-8fd3-31326275573a-kube-api-access-85q8g\") pod \"c29ee970-138b-49b0-8fd3-31326275573a\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.010776 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-catalog-content\") pod \"c29ee970-138b-49b0-8fd3-31326275573a\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.010827 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-utilities\") pod \"c29ee970-138b-49b0-8fd3-31326275573a\" (UID: \"c29ee970-138b-49b0-8fd3-31326275573a\") " Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.011501 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-utilities" (OuterVolumeSpecName: "utilities") pod "c29ee970-138b-49b0-8fd3-31326275573a" (UID: "c29ee970-138b-49b0-8fd3-31326275573a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.016380 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29ee970-138b-49b0-8fd3-31326275573a-kube-api-access-85q8g" (OuterVolumeSpecName: "kube-api-access-85q8g") pod "c29ee970-138b-49b0-8fd3-31326275573a" (UID: "c29ee970-138b-49b0-8fd3-31326275573a"). InnerVolumeSpecName "kube-api-access-85q8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.028566 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c29ee970-138b-49b0-8fd3-31326275573a" (UID: "c29ee970-138b-49b0-8fd3-31326275573a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.112733 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.112778 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29ee970-138b-49b0-8fd3-31326275573a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.112789 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85q8g\" (UniqueName: \"kubernetes.io/projected/c29ee970-138b-49b0-8fd3-31326275573a-kube-api-access-85q8g\") on node \"crc\" DevicePath \"\"" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.791338 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvzsc" event={"ID":"c29ee970-138b-49b0-8fd3-31326275573a","Type":"ContainerDied","Data":"46f98e6132544e2b2becc75202be69cbbda8eda0741261a2d52782eea7fba978"} Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.791392 4775 scope.go:117] "RemoveContainer" containerID="f5d483c5d3bea546a53e88772cf36f68d013406e9c9ec808a06d229a83caa3ab" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.791478 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvzsc" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.819370 4775 scope.go:117] "RemoveContainer" containerID="2f656fa9486d3bec2b8afed606fb5f6f34d47b6db4fb1af6fd79ffcd8412fc8b" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.839736 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvzsc"] Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.844739 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvzsc"] Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.869969 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29ee970-138b-49b0-8fd3-31326275573a" path="/var/lib/kubelet/pods/c29ee970-138b-49b0-8fd3-31326275573a/volumes" Nov 25 20:11:46 crc kubenswrapper[4775]: I1125 20:11:46.875324 4775 scope.go:117] "RemoveContainer" containerID="8199cdaf81c67f0eaf28c134b02185f4dc2934277de98e3e67787b7cbcb61880" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.853413 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:11:48 crc kubenswrapper[4775]: E1125 20:11:48.853733 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.970774 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmcnf"] Nov 25 20:11:48 crc kubenswrapper[4775]: E1125 20:11:48.971229 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c29ee970-138b-49b0-8fd3-31326275573a" containerName="extract-utilities" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.971246 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29ee970-138b-49b0-8fd3-31326275573a" containerName="extract-utilities" Nov 25 20:11:48 crc kubenswrapper[4775]: E1125 20:11:48.971261 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29ee970-138b-49b0-8fd3-31326275573a" containerName="extract-content" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.971269 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29ee970-138b-49b0-8fd3-31326275573a" containerName="extract-content" Nov 25 20:11:48 crc kubenswrapper[4775]: E1125 20:11:48.971290 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29ee970-138b-49b0-8fd3-31326275573a" containerName="registry-server" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.971298 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29ee970-138b-49b0-8fd3-31326275573a" containerName="registry-server" Nov 25 20:11:48 crc kubenswrapper[4775]: E1125 20:11:48.971316 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerName="extract-utilities" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.971324 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerName="extract-utilities" Nov 25 20:11:48 crc kubenswrapper[4775]: E1125 20:11:48.971340 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerName="registry-server" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.971347 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerName="registry-server" Nov 25 20:11:48 crc kubenswrapper[4775]: E1125 20:11:48.971363 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerName="extract-content" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.971369 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerName="extract-content" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.971568 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29ee970-138b-49b0-8fd3-31326275573a" containerName="registry-server" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.971580 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a73c6c1-fdc3-44b3-9b26-13821b4a7619" containerName="registry-server" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.973138 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:48 crc kubenswrapper[4775]: I1125 20:11:48.991562 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmcnf"] Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.173026 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27cb\" (UniqueName: \"kubernetes.io/projected/921b3dcd-438e-4305-9041-5001548a0a10-kube-api-access-x27cb\") pod \"certified-operators-nmcnf\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.173378 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-utilities\") pod \"certified-operators-nmcnf\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.173859 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-catalog-content\") pod \"certified-operators-nmcnf\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.275338 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-catalog-content\") pod \"certified-operators-nmcnf\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.275508 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x27cb\" (UniqueName: \"kubernetes.io/projected/921b3dcd-438e-4305-9041-5001548a0a10-kube-api-access-x27cb\") pod \"certified-operators-nmcnf\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.275543 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-utilities\") pod \"certified-operators-nmcnf\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.275915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-catalog-content\") pod \"certified-operators-nmcnf\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.276053 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-utilities\") pod \"certified-operators-nmcnf\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.302510 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x27cb\" (UniqueName: \"kubernetes.io/projected/921b3dcd-438e-4305-9041-5001548a0a10-kube-api-access-x27cb\") pod \"certified-operators-nmcnf\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.353632 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.634005 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmcnf"] Nov 25 20:11:49 crc kubenswrapper[4775]: I1125 20:11:49.822157 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcnf" event={"ID":"921b3dcd-438e-4305-9041-5001548a0a10","Type":"ContainerStarted","Data":"cca67875929007a61ca8177e32b8b6baf16726ac012f1dac53e1c526c9c2b7e5"} Nov 25 20:11:50 crc kubenswrapper[4775]: I1125 20:11:50.837921 4775 generic.go:334] "Generic (PLEG): container finished" podID="921b3dcd-438e-4305-9041-5001548a0a10" containerID="a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0" exitCode=0 Nov 25 20:11:50 crc kubenswrapper[4775]: I1125 20:11:50.837992 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcnf" event={"ID":"921b3dcd-438e-4305-9041-5001548a0a10","Type":"ContainerDied","Data":"a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0"} Nov 25 20:11:51 crc kubenswrapper[4775]: I1125 
20:11:51.862702 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcnf" event={"ID":"921b3dcd-438e-4305-9041-5001548a0a10","Type":"ContainerStarted","Data":"679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011"} Nov 25 20:11:52 crc kubenswrapper[4775]: I1125 20:11:52.877204 4775 generic.go:334] "Generic (PLEG): container finished" podID="921b3dcd-438e-4305-9041-5001548a0a10" containerID="679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011" exitCode=0 Nov 25 20:11:52 crc kubenswrapper[4775]: I1125 20:11:52.877277 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcnf" event={"ID":"921b3dcd-438e-4305-9041-5001548a0a10","Type":"ContainerDied","Data":"679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011"} Nov 25 20:11:53 crc kubenswrapper[4775]: I1125 20:11:53.890584 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcnf" event={"ID":"921b3dcd-438e-4305-9041-5001548a0a10","Type":"ContainerStarted","Data":"e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41"} Nov 25 20:11:53 crc kubenswrapper[4775]: I1125 20:11:53.923442 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmcnf" podStartSLOduration=3.445587137 podStartE2EDuration="5.923408519s" podCreationTimestamp="2025-11-25 20:11:48 +0000 UTC" firstStartedPulling="2025-11-25 20:11:50.840065244 +0000 UTC m=+2292.756427650" lastFinishedPulling="2025-11-25 20:11:53.317886636 +0000 UTC m=+2295.234249032" observedRunningTime="2025-11-25 20:11:53.91052848 +0000 UTC m=+2295.826890846" watchObservedRunningTime="2025-11-25 20:11:53.923408519 +0000 UTC m=+2295.839770925" Nov 25 20:11:59 crc kubenswrapper[4775]: I1125 20:11:59.354762 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nmcnf" 
Nov 25 20:11:59 crc kubenswrapper[4775]: I1125 20:11:59.355230 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:59 crc kubenswrapper[4775]: I1125 20:11:59.415464 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:11:59 crc kubenswrapper[4775]: I1125 20:11:59.999882 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:12:00 crc kubenswrapper[4775]: I1125 20:12:00.847183 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:12:00 crc kubenswrapper[4775]: E1125 20:12:00.847612 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.364745 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmcnf"] Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.365209 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmcnf" podUID="921b3dcd-438e-4305-9041-5001548a0a10" containerName="registry-server" containerID="cri-o://e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41" gracePeriod=2 Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.909019 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.984614 4775 generic.go:334] "Generic (PLEG): container finished" podID="921b3dcd-438e-4305-9041-5001548a0a10" containerID="e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41" exitCode=0 Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.984710 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcnf" event={"ID":"921b3dcd-438e-4305-9041-5001548a0a10","Type":"ContainerDied","Data":"e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41"} Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.984743 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcnf" event={"ID":"921b3dcd-438e-4305-9041-5001548a0a10","Type":"ContainerDied","Data":"cca67875929007a61ca8177e32b8b6baf16726ac012f1dac53e1c526c9c2b7e5"} Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.984763 4775 scope.go:117] "RemoveContainer" containerID="e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41" Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.984942 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmcnf" Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.992041 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-utilities\") pod \"921b3dcd-438e-4305-9041-5001548a0a10\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.992105 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x27cb\" (UniqueName: \"kubernetes.io/projected/921b3dcd-438e-4305-9041-5001548a0a10-kube-api-access-x27cb\") pod \"921b3dcd-438e-4305-9041-5001548a0a10\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.992270 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-catalog-content\") pod \"921b3dcd-438e-4305-9041-5001548a0a10\" (UID: \"921b3dcd-438e-4305-9041-5001548a0a10\") " Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.993008 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-utilities" (OuterVolumeSpecName: "utilities") pod "921b3dcd-438e-4305-9041-5001548a0a10" (UID: "921b3dcd-438e-4305-9041-5001548a0a10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:12:02 crc kubenswrapper[4775]: I1125 20:12:02.997033 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921b3dcd-438e-4305-9041-5001548a0a10-kube-api-access-x27cb" (OuterVolumeSpecName: "kube-api-access-x27cb") pod "921b3dcd-438e-4305-9041-5001548a0a10" (UID: "921b3dcd-438e-4305-9041-5001548a0a10"). InnerVolumeSpecName "kube-api-access-x27cb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.011661 4775 scope.go:117] "RemoveContainer" containerID="679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.049316 4775 scope.go:117] "RemoveContainer" containerID="a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.059888 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "921b3dcd-438e-4305-9041-5001548a0a10" (UID: "921b3dcd-438e-4305-9041-5001548a0a10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.094937 4775 scope.go:117] "RemoveContainer" containerID="e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41" Nov 25 20:12:03 crc kubenswrapper[4775]: E1125 20:12:03.095466 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41\": container with ID starting with e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41 not found: ID does not exist" containerID="e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.095498 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41"} err="failed to get container status \"e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41\": rpc error: code = NotFound desc = could not find container \"e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41\": container with ID starting 
with e37951d6744a00c7ae1defb38baf14a0dafad0b54481c83652ccaa048a9cfe41 not found: ID does not exist" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.095533 4775 scope.go:117] "RemoveContainer" containerID="679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011" Nov 25 20:12:03 crc kubenswrapper[4775]: E1125 20:12:03.095759 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011\": container with ID starting with 679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011 not found: ID does not exist" containerID="679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.095804 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011"} err="failed to get container status \"679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011\": rpc error: code = NotFound desc = could not find container \"679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011\": container with ID starting with 679dc2ce039bbdcd16a37eed9236aa8204cd44e8ae6516a0030fe54ca89a6011 not found: ID does not exist" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.095818 4775 scope.go:117] "RemoveContainer" containerID="a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0" Nov 25 20:12:03 crc kubenswrapper[4775]: E1125 20:12:03.096252 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0\": container with ID starting with a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0 not found: ID does not exist" containerID="a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0" Nov 25 20:12:03 
crc kubenswrapper[4775]: I1125 20:12:03.096293 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0"} err="failed to get container status \"a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0\": rpc error: code = NotFound desc = could not find container \"a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0\": container with ID starting with a42aaebe03cca72d6aea2d379d484e46c72c6263b031d5b73a7e904507593bf0 not found: ID does not exist" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.097256 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x27cb\" (UniqueName: \"kubernetes.io/projected/921b3dcd-438e-4305-9041-5001548a0a10-kube-api-access-x27cb\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.097444 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.097582 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921b3dcd-438e-4305-9041-5001548a0a10-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.339345 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmcnf"] Nov 25 20:12:03 crc kubenswrapper[4775]: I1125 20:12:03.349245 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nmcnf"] Nov 25 20:12:04 crc kubenswrapper[4775]: I1125 20:12:04.860844 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921b3dcd-438e-4305-9041-5001548a0a10" path="/var/lib/kubelet/pods/921b3dcd-438e-4305-9041-5001548a0a10/volumes" Nov 25 20:12:12 
crc kubenswrapper[4775]: I1125 20:12:12.847262 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:12:12 crc kubenswrapper[4775]: E1125 20:12:12.849313 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:12:26 crc kubenswrapper[4775]: I1125 20:12:26.848960 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:12:26 crc kubenswrapper[4775]: E1125 20:12:26.850586 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:12:31 crc kubenswrapper[4775]: I1125 20:12:31.296359 4775 generic.go:334] "Generic (PLEG): container finished" podID="ed47c3bd-5136-4d5a-946b-924498853472" containerID="8f33641947f42af62be3ce1f9e4098cd55ed1b1466b5551f134c01934230ba8e" exitCode=0 Nov 25 20:12:31 crc kubenswrapper[4775]: I1125 20:12:31.296502 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" event={"ID":"ed47c3bd-5136-4d5a-946b-924498853472","Type":"ContainerDied","Data":"8f33641947f42af62be3ce1f9e4098cd55ed1b1466b5551f134c01934230ba8e"} Nov 25 20:12:32 crc kubenswrapper[4775]: I1125 20:12:32.823258 4775 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:12:32 crc kubenswrapper[4775]: I1125 20:12:32.954219 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ssh-key\") pod \"ed47c3bd-5136-4d5a-946b-924498853472\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " Nov 25 20:12:32 crc kubenswrapper[4775]: I1125 20:12:32.954323 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-inventory\") pod \"ed47c3bd-5136-4d5a-946b-924498853472\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " Nov 25 20:12:32 crc kubenswrapper[4775]: I1125 20:12:32.954448 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql9bp\" (UniqueName: \"kubernetes.io/projected/ed47c3bd-5136-4d5a-946b-924498853472-kube-api-access-ql9bp\") pod \"ed47c3bd-5136-4d5a-946b-924498853472\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " Nov 25 20:12:32 crc kubenswrapper[4775]: I1125 20:12:32.954473 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ceph\") pod \"ed47c3bd-5136-4d5a-946b-924498853472\" (UID: \"ed47c3bd-5136-4d5a-946b-924498853472\") " Nov 25 20:12:32 crc kubenswrapper[4775]: I1125 20:12:32.960939 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ceph" (OuterVolumeSpecName: "ceph") pod "ed47c3bd-5136-4d5a-946b-924498853472" (UID: "ed47c3bd-5136-4d5a-946b-924498853472"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:12:32 crc kubenswrapper[4775]: I1125 20:12:32.970105 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed47c3bd-5136-4d5a-946b-924498853472-kube-api-access-ql9bp" (OuterVolumeSpecName: "kube-api-access-ql9bp") pod "ed47c3bd-5136-4d5a-946b-924498853472" (UID: "ed47c3bd-5136-4d5a-946b-924498853472"). InnerVolumeSpecName "kube-api-access-ql9bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:12:32 crc kubenswrapper[4775]: I1125 20:12:32.986593 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-inventory" (OuterVolumeSpecName: "inventory") pod "ed47c3bd-5136-4d5a-946b-924498853472" (UID: "ed47c3bd-5136-4d5a-946b-924498853472"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:12:32 crc kubenswrapper[4775]: I1125 20:12:32.991237 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ed47c3bd-5136-4d5a-946b-924498853472" (UID: "ed47c3bd-5136-4d5a-946b-924498853472"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.057863 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.057921 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.057946 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql9bp\" (UniqueName: \"kubernetes.io/projected/ed47c3bd-5136-4d5a-946b-924498853472-kube-api-access-ql9bp\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.057969 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed47c3bd-5136-4d5a-946b-924498853472-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.319089 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" event={"ID":"ed47c3bd-5136-4d5a-946b-924498853472","Type":"ContainerDied","Data":"a7c5d5dbf2ccd17ada1b94bca2e90645f5cddd816eaddead82ed6033867872b2"} Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.319128 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7c5d5dbf2ccd17ada1b94bca2e90645f5cddd816eaddead82ed6033867872b2" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.319192 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.439776 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jbgj4"] Nov 25 20:12:33 crc kubenswrapper[4775]: E1125 20:12:33.440357 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed47c3bd-5136-4d5a-946b-924498853472" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.440390 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed47c3bd-5136-4d5a-946b-924498853472" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:12:33 crc kubenswrapper[4775]: E1125 20:12:33.440422 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921b3dcd-438e-4305-9041-5001548a0a10" containerName="extract-utilities" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.440438 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="921b3dcd-438e-4305-9041-5001548a0a10" containerName="extract-utilities" Nov 25 20:12:33 crc kubenswrapper[4775]: E1125 20:12:33.440479 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921b3dcd-438e-4305-9041-5001548a0a10" containerName="extract-content" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.440494 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="921b3dcd-438e-4305-9041-5001548a0a10" containerName="extract-content" Nov 25 20:12:33 crc kubenswrapper[4775]: E1125 20:12:33.440533 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921b3dcd-438e-4305-9041-5001548a0a10" containerName="registry-server" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.440545 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="921b3dcd-438e-4305-9041-5001548a0a10" containerName="registry-server" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.440893 4775 
memory_manager.go:354] "RemoveStaleState removing state" podUID="921b3dcd-438e-4305-9041-5001548a0a10" containerName="registry-server" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.440928 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed47c3bd-5136-4d5a-946b-924498853472" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.441971 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.446426 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.446622 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.446475 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.447067 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.451000 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.453947 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jbgj4"] Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.566344 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mslp4\" (UniqueName: \"kubernetes.io/projected/dd8d944f-8e84-4b3c-a92f-d8a815571a85-kube-api-access-mslp4\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") 
" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.566518 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ceph\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.566738 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.566917 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.668754 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mslp4\" (UniqueName: \"kubernetes.io/projected/dd8d944f-8e84-4b3c-a92f-d8a815571a85-kube-api-access-mslp4\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.668937 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ceph\") pod 
\"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.669020 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.670110 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.673724 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.674005 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ceph\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.674387 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: 
\"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.705476 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mslp4\" (UniqueName: \"kubernetes.io/projected/dd8d944f-8e84-4b3c-a92f-d8a815571a85-kube-api-access-mslp4\") pod \"ssh-known-hosts-edpm-deployment-jbgj4\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:33 crc kubenswrapper[4775]: I1125 20:12:33.765604 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:34 crc kubenswrapper[4775]: I1125 20:12:34.305153 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jbgj4"] Nov 25 20:12:34 crc kubenswrapper[4775]: I1125 20:12:34.335842 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" event={"ID":"dd8d944f-8e84-4b3c-a92f-d8a815571a85","Type":"ContainerStarted","Data":"035cc547d8c0424435f3d5ec0d924d36f7cb3d8544f6ae8d44d75e114d2d12d7"} Nov 25 20:12:35 crc kubenswrapper[4775]: I1125 20:12:35.350274 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" event={"ID":"dd8d944f-8e84-4b3c-a92f-d8a815571a85","Type":"ContainerStarted","Data":"8682be88fb748d2c9a2b171fece3fdc5d8f99e6447d1caa5bd4f5a8561b44972"} Nov 25 20:12:35 crc kubenswrapper[4775]: I1125 20:12:35.380314 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" podStartSLOduration=1.924948363 podStartE2EDuration="2.380290006s" podCreationTimestamp="2025-11-25 20:12:33 +0000 UTC" firstStartedPulling="2025-11-25 
20:12:34.307481617 +0000 UTC m=+2336.223843993" lastFinishedPulling="2025-11-25 20:12:34.76282323 +0000 UTC m=+2336.679185636" observedRunningTime="2025-11-25 20:12:35.371507529 +0000 UTC m=+2337.287869905" watchObservedRunningTime="2025-11-25 20:12:35.380290006 +0000 UTC m=+2337.296652382" Nov 25 20:12:37 crc kubenswrapper[4775]: I1125 20:12:37.846842 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:12:37 crc kubenswrapper[4775]: E1125 20:12:37.847464 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:12:45 crc kubenswrapper[4775]: I1125 20:12:45.459933 4775 generic.go:334] "Generic (PLEG): container finished" podID="dd8d944f-8e84-4b3c-a92f-d8a815571a85" containerID="8682be88fb748d2c9a2b171fece3fdc5d8f99e6447d1caa5bd4f5a8561b44972" exitCode=0 Nov 25 20:12:45 crc kubenswrapper[4775]: I1125 20:12:45.460048 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" event={"ID":"dd8d944f-8e84-4b3c-a92f-d8a815571a85","Type":"ContainerDied","Data":"8682be88fb748d2c9a2b171fece3fdc5d8f99e6447d1caa5bd4f5a8561b44972"} Nov 25 20:12:46 crc kubenswrapper[4775]: I1125 20:12:46.899342 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.035230 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mslp4\" (UniqueName: \"kubernetes.io/projected/dd8d944f-8e84-4b3c-a92f-d8a815571a85-kube-api-access-mslp4\") pod \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.035289 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-inventory-0\") pod \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.035554 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ssh-key-openstack-edpm-ipam\") pod \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.035734 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ceph\") pod \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\" (UID: \"dd8d944f-8e84-4b3c-a92f-d8a815571a85\") " Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.041903 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ceph" (OuterVolumeSpecName: "ceph") pod "dd8d944f-8e84-4b3c-a92f-d8a815571a85" (UID: "dd8d944f-8e84-4b3c-a92f-d8a815571a85"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.042326 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8d944f-8e84-4b3c-a92f-d8a815571a85-kube-api-access-mslp4" (OuterVolumeSpecName: "kube-api-access-mslp4") pod "dd8d944f-8e84-4b3c-a92f-d8a815571a85" (UID: "dd8d944f-8e84-4b3c-a92f-d8a815571a85"). InnerVolumeSpecName "kube-api-access-mslp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.076699 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd8d944f-8e84-4b3c-a92f-d8a815571a85" (UID: "dd8d944f-8e84-4b3c-a92f-d8a815571a85"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.077728 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "dd8d944f-8e84-4b3c-a92f-d8a815571a85" (UID: "dd8d944f-8e84-4b3c-a92f-d8a815571a85"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.138182 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.138224 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.138236 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mslp4\" (UniqueName: \"kubernetes.io/projected/dd8d944f-8e84-4b3c-a92f-d8a815571a85-kube-api-access-mslp4\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.138248 4775 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd8d944f-8e84-4b3c-a92f-d8a815571a85-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.503094 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" event={"ID":"dd8d944f-8e84-4b3c-a92f-d8a815571a85","Type":"ContainerDied","Data":"035cc547d8c0424435f3d5ec0d924d36f7cb3d8544f6ae8d44d75e114d2d12d7"} Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.503272 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="035cc547d8c0424435f3d5ec0d924d36f7cb3d8544f6ae8d44d75e114d2d12d7" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.503182 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jbgj4" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.590212 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr"] Nov 25 20:12:47 crc kubenswrapper[4775]: E1125 20:12:47.590627 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8d944f-8e84-4b3c-a92f-d8a815571a85" containerName="ssh-known-hosts-edpm-deployment" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.590679 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8d944f-8e84-4b3c-a92f-d8a815571a85" containerName="ssh-known-hosts-edpm-deployment" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.591036 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8d944f-8e84-4b3c-a92f-d8a815571a85" containerName="ssh-known-hosts-edpm-deployment" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.592435 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.596748 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.597152 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.597310 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.597686 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.598005 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.603777 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr"] Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.748346 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.748392 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5rg\" (UniqueName: \"kubernetes.io/projected/b37d2556-fc78-4546-b531-ce2cebc6e8ec-kube-api-access-wv5rg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.748436 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.748531 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.849940 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.850061 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.850088 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv5rg\" (UniqueName: 
\"kubernetes.io/projected/b37d2556-fc78-4546-b531-ce2cebc6e8ec-kube-api-access-wv5rg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.850125 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.854718 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.854925 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.856581 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.879188 4775 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wv5rg\" (UniqueName: \"kubernetes.io/projected/b37d2556-fc78-4546-b531-ce2cebc6e8ec-kube-api-access-wv5rg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvfr\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:47 crc kubenswrapper[4775]: I1125 20:12:47.956830 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:12:48 crc kubenswrapper[4775]: I1125 20:12:48.317618 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr"] Nov 25 20:12:48 crc kubenswrapper[4775]: I1125 20:12:48.512069 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" event={"ID":"b37d2556-fc78-4546-b531-ce2cebc6e8ec","Type":"ContainerStarted","Data":"b90209b9662543960c0ab11081acadfcd6e3069ac24871f1f4695d58924f0806"} Nov 25 20:12:49 crc kubenswrapper[4775]: I1125 20:12:49.524783 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" event={"ID":"b37d2556-fc78-4546-b531-ce2cebc6e8ec","Type":"ContainerStarted","Data":"72cbb75ecccafbfb19e5e593a45a4b7d7c7267dc4607232e370c49135b95e40d"} Nov 25 20:12:49 crc kubenswrapper[4775]: I1125 20:12:49.567633 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" podStartSLOduration=2.072793057 podStartE2EDuration="2.56760553s" podCreationTimestamp="2025-11-25 20:12:47 +0000 UTC" firstStartedPulling="2025-11-25 20:12:48.325929264 +0000 UTC m=+2350.242291670" lastFinishedPulling="2025-11-25 20:12:48.820741767 +0000 UTC m=+2350.737104143" observedRunningTime="2025-11-25 20:12:49.557479296 +0000 UTC m=+2351.473841672" watchObservedRunningTime="2025-11-25 20:12:49.56760553 +0000 
UTC m=+2351.483967936" Nov 25 20:12:50 crc kubenswrapper[4775]: I1125 20:12:50.846750 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:12:50 crc kubenswrapper[4775]: E1125 20:12:50.848831 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:12:58 crc kubenswrapper[4775]: I1125 20:12:58.623723 4775 generic.go:334] "Generic (PLEG): container finished" podID="b37d2556-fc78-4546-b531-ce2cebc6e8ec" containerID="72cbb75ecccafbfb19e5e593a45a4b7d7c7267dc4607232e370c49135b95e40d" exitCode=0 Nov 25 20:12:58 crc kubenswrapper[4775]: I1125 20:12:58.623809 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" event={"ID":"b37d2556-fc78-4546-b531-ce2cebc6e8ec","Type":"ContainerDied","Data":"72cbb75ecccafbfb19e5e593a45a4b7d7c7267dc4607232e370c49135b95e40d"} Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.116148 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.171402 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv5rg\" (UniqueName: \"kubernetes.io/projected/b37d2556-fc78-4546-b531-ce2cebc6e8ec-kube-api-access-wv5rg\") pod \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.196142 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37d2556-fc78-4546-b531-ce2cebc6e8ec-kube-api-access-wv5rg" (OuterVolumeSpecName: "kube-api-access-wv5rg") pod "b37d2556-fc78-4546-b531-ce2cebc6e8ec" (UID: "b37d2556-fc78-4546-b531-ce2cebc6e8ec"). InnerVolumeSpecName "kube-api-access-wv5rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.273350 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ssh-key\") pod \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.273424 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ceph\") pod \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.273442 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-inventory\") pod \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\" (UID: \"b37d2556-fc78-4546-b531-ce2cebc6e8ec\") " Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.273800 4775 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv5rg\" (UniqueName: \"kubernetes.io/projected/b37d2556-fc78-4546-b531-ce2cebc6e8ec-kube-api-access-wv5rg\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.277086 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ceph" (OuterVolumeSpecName: "ceph") pod "b37d2556-fc78-4546-b531-ce2cebc6e8ec" (UID: "b37d2556-fc78-4546-b531-ce2cebc6e8ec"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.307181 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b37d2556-fc78-4546-b531-ce2cebc6e8ec" (UID: "b37d2556-fc78-4546-b531-ce2cebc6e8ec"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.311949 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-inventory" (OuterVolumeSpecName: "inventory") pod "b37d2556-fc78-4546-b531-ce2cebc6e8ec" (UID: "b37d2556-fc78-4546-b531-ce2cebc6e8ec"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.376271 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.376322 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.376345 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37d2556-fc78-4546-b531-ce2cebc6e8ec-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.650011 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" event={"ID":"b37d2556-fc78-4546-b531-ce2cebc6e8ec","Type":"ContainerDied","Data":"b90209b9662543960c0ab11081acadfcd6e3069ac24871f1f4695d58924f0806"} Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.650079 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90209b9662543960c0ab11081acadfcd6e3069ac24871f1f4695d58924f0806" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.650104 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvfr" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.750429 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz"] Nov 25 20:13:00 crc kubenswrapper[4775]: E1125 20:13:00.750876 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37d2556-fc78-4546-b531-ce2cebc6e8ec" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.750899 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37d2556-fc78-4546-b531-ce2cebc6e8ec" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.751110 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37d2556-fc78-4546-b531-ce2cebc6e8ec" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.751842 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.756190 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.756998 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.757818 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.758759 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.759156 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.773325 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz"] Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.785099 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtsgg\" (UniqueName: \"kubernetes.io/projected/87220c3d-2b9e-4ffb-bec9-df48355c9aac-kube-api-access-wtsgg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.785205 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.785258 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.785388 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.887777 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.887862 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.887934 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.888095 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtsgg\" (UniqueName: \"kubernetes.io/projected/87220c3d-2b9e-4ffb-bec9-df48355c9aac-kube-api-access-wtsgg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.893772 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.894537 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.898082 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:00 crc kubenswrapper[4775]: I1125 20:13:00.910167 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtsgg\" (UniqueName: \"kubernetes.io/projected/87220c3d-2b9e-4ffb-bec9-df48355c9aac-kube-api-access-wtsgg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:01 crc kubenswrapper[4775]: I1125 20:13:01.090239 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:01 crc kubenswrapper[4775]: I1125 20:13:01.711835 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz"] Nov 25 20:13:02 crc kubenswrapper[4775]: I1125 20:13:02.668967 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" event={"ID":"87220c3d-2b9e-4ffb-bec9-df48355c9aac","Type":"ContainerStarted","Data":"f2fd2f7aca3e6a5504e83e9b74427aa4aeeb558049d6974a765a9bc8094be02b"} Nov 25 20:13:02 crc kubenswrapper[4775]: I1125 20:13:02.669486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" event={"ID":"87220c3d-2b9e-4ffb-bec9-df48355c9aac","Type":"ContainerStarted","Data":"cc73c27dd8ef87f27f296ee705aafd1b6eb6b1a5a5b9f5a9bff60de3b46f4533"} Nov 25 20:13:02 crc kubenswrapper[4775]: I1125 20:13:02.697594 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" podStartSLOduration=2.196816571 podStartE2EDuration="2.697568974s" podCreationTimestamp="2025-11-25 20:13:00 +0000 UTC" firstStartedPulling="2025-11-25 20:13:01.73709381 +0000 UTC m=+2363.653456176" lastFinishedPulling="2025-11-25 20:13:02.237846213 +0000 UTC m=+2364.154208579" observedRunningTime="2025-11-25 20:13:02.685712383 +0000 UTC 
m=+2364.602074789" watchObservedRunningTime="2025-11-25 20:13:02.697568974 +0000 UTC m=+2364.613931350" Nov 25 20:13:03 crc kubenswrapper[4775]: I1125 20:13:03.847223 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:13:03 crc kubenswrapper[4775]: E1125 20:13:03.847754 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:13:13 crc kubenswrapper[4775]: I1125 20:13:13.788430 4775 generic.go:334] "Generic (PLEG): container finished" podID="87220c3d-2b9e-4ffb-bec9-df48355c9aac" containerID="f2fd2f7aca3e6a5504e83e9b74427aa4aeeb558049d6974a765a9bc8094be02b" exitCode=0 Nov 25 20:13:13 crc kubenswrapper[4775]: I1125 20:13:13.788566 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" event={"ID":"87220c3d-2b9e-4ffb-bec9-df48355c9aac","Type":"ContainerDied","Data":"f2fd2f7aca3e6a5504e83e9b74427aa4aeeb558049d6974a765a9bc8094be02b"} Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.310055 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.475698 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ssh-key\") pod \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.475780 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtsgg\" (UniqueName: \"kubernetes.io/projected/87220c3d-2b9e-4ffb-bec9-df48355c9aac-kube-api-access-wtsgg\") pod \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.475908 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-inventory\") pod \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.476046 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ceph\") pod \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\" (UID: \"87220c3d-2b9e-4ffb-bec9-df48355c9aac\") " Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.481124 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87220c3d-2b9e-4ffb-bec9-df48355c9aac-kube-api-access-wtsgg" (OuterVolumeSpecName: "kube-api-access-wtsgg") pod "87220c3d-2b9e-4ffb-bec9-df48355c9aac" (UID: "87220c3d-2b9e-4ffb-bec9-df48355c9aac"). InnerVolumeSpecName "kube-api-access-wtsgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.491810 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ceph" (OuterVolumeSpecName: "ceph") pod "87220c3d-2b9e-4ffb-bec9-df48355c9aac" (UID: "87220c3d-2b9e-4ffb-bec9-df48355c9aac"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.518441 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87220c3d-2b9e-4ffb-bec9-df48355c9aac" (UID: "87220c3d-2b9e-4ffb-bec9-df48355c9aac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.525830 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-inventory" (OuterVolumeSpecName: "inventory") pod "87220c3d-2b9e-4ffb-bec9-df48355c9aac" (UID: "87220c3d-2b9e-4ffb-bec9-df48355c9aac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.577976 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtsgg\" (UniqueName: \"kubernetes.io/projected/87220c3d-2b9e-4ffb-bec9-df48355c9aac-kube-api-access-wtsgg\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.578011 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.578023 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.578032 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87220c3d-2b9e-4ffb-bec9-df48355c9aac-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.819803 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" event={"ID":"87220c3d-2b9e-4ffb-bec9-df48355c9aac","Type":"ContainerDied","Data":"cc73c27dd8ef87f27f296ee705aafd1b6eb6b1a5a5b9f5a9bff60de3b46f4533"} Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.819858 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc73c27dd8ef87f27f296ee705aafd1b6eb6b1a5a5b9f5a9bff60de3b46f4533" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.819875 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.928182 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x"] Nov 25 20:13:15 crc kubenswrapper[4775]: E1125 20:13:15.928586 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87220c3d-2b9e-4ffb-bec9-df48355c9aac" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.928617 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="87220c3d-2b9e-4ffb-bec9-df48355c9aac" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.928845 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="87220c3d-2b9e-4ffb-bec9-df48355c9aac" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.929562 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.939701 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x"] Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.941241 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.941241 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.941310 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.941465 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.941789 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.956279 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.956633 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:13:15 crc kubenswrapper[4775]: I1125 20:13:15.956871 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.088707 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.089130 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.089221 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.089267 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.089377 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: 
\"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.089438 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.089480 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.089707 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.089821 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.090012 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.090124 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.090212 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhdz\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-kube-api-access-6zhdz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.090313 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.192822 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.192940 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193010 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193058 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193109 
4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193157 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193217 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193272 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193312 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6zhdz\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-kube-api-access-6zhdz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193533 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193614 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193735 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.193787 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.199848 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.201302 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.201498 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.202313 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 
20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.203070 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.203573 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.203838 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.203940 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.205344 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.206142 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.207623 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.208615 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.233276 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zhdz\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-kube-api-access-6zhdz\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-84l2x\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.258282 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.665058 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x"] Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.670197 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:13:16 crc kubenswrapper[4775]: I1125 20:13:16.829728 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" event={"ID":"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5","Type":"ContainerStarted","Data":"ef1d45dd132cae8bc8b723dcc66efa45899069e0bbc66b30c24e323652b678b0"} Nov 25 20:13:17 crc kubenswrapper[4775]: I1125 20:13:17.843158 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" event={"ID":"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5","Type":"ContainerStarted","Data":"b558b18e63ec1319c94297fab77770e22a53bb13c0e7d786e2c11875ba7fdda2"} Nov 25 20:13:17 crc kubenswrapper[4775]: I1125 20:13:17.847911 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:13:17 crc kubenswrapper[4775]: E1125 20:13:17.848377 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:13:17 crc kubenswrapper[4775]: I1125 20:13:17.873756 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" podStartSLOduration=2.434423094 podStartE2EDuration="2.873724672s" podCreationTimestamp="2025-11-25 20:13:15 +0000 UTC" firstStartedPulling="2025-11-25 20:13:16.669937442 +0000 UTC m=+2378.586299818" lastFinishedPulling="2025-11-25 20:13:17.10923903 +0000 UTC m=+2379.025601396" observedRunningTime="2025-11-25 20:13:17.870961067 +0000 UTC m=+2379.787323473" watchObservedRunningTime="2025-11-25 20:13:17.873724672 +0000 UTC m=+2379.790087098" Nov 25 20:13:28 crc kubenswrapper[4775]: I1125 20:13:28.852594 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:13:28 crc kubenswrapper[4775]: E1125 20:13:28.853708 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:13:40 crc kubenswrapper[4775]: I1125 20:13:40.848070 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:13:40 crc kubenswrapper[4775]: E1125 20:13:40.849327 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:13:53 crc kubenswrapper[4775]: I1125 20:13:53.847818 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:13:53 crc kubenswrapper[4775]: E1125 20:13:53.849035 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:13:55 crc kubenswrapper[4775]: I1125 20:13:55.283363 4775 generic.go:334] "Generic (PLEG): container finished" podID="57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" containerID="b558b18e63ec1319c94297fab77770e22a53bb13c0e7d786e2c11875ba7fdda2" exitCode=0 Nov 25 20:13:55 crc kubenswrapper[4775]: I1125 20:13:55.283594 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" event={"ID":"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5","Type":"ContainerDied","Data":"b558b18e63ec1319c94297fab77770e22a53bb13c0e7d786e2c11875ba7fdda2"} Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.738899 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.924096 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.924160 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-bootstrap-combined-ca-bundle\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.924205 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.924252 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-repo-setup-combined-ca-bundle\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.924488 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.924785 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ceph\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.925977 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ssh-key\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.926072 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zhdz\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-kube-api-access-6zhdz\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.926121 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-inventory\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.926190 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-neutron-metadata-combined-ca-bundle\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 
crc kubenswrapper[4775]: I1125 20:13:56.926246 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ovn-combined-ca-bundle\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.926285 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-nova-combined-ca-bundle\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.926421 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-libvirt-combined-ca-bundle\") pod \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\" (UID: \"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5\") " Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.933073 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.933803 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ceph" (OuterVolumeSpecName: "ceph") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.934463 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.934514 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.934794 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.935529 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.937248 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.940247 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.941874 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.941973 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-kube-api-access-6zhdz" (OuterVolumeSpecName: "kube-api-access-6zhdz") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "kube-api-access-6zhdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.948905 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.978545 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:56 crc kubenswrapper[4775]: I1125 20:13:56.984792 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-inventory" (OuterVolumeSpecName: "inventory") pod "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" (UID: "57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030044 4775 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030093 4775 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030116 4775 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030134 4775 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030154 4775 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030171 4775 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030191 4775 
reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030208 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030225 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zhdz\" (UniqueName: \"kubernetes.io/projected/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-kube-api-access-6zhdz\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030239 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030254 4775 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030273 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.030289 4775 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.304281 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" event={"ID":"57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5","Type":"ContainerDied","Data":"ef1d45dd132cae8bc8b723dcc66efa45899069e0bbc66b30c24e323652b678b0"} Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.304683 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef1d45dd132cae8bc8b723dcc66efa45899069e0bbc66b30c24e323652b678b0" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.304397 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-84l2x" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.427047 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs"] Nov 25 20:13:57 crc kubenswrapper[4775]: E1125 20:13:57.427733 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.427764 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.428094 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.429203 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.434388 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs"] Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.439740 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.439843 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxbt\" (UniqueName: \"kubernetes.io/projected/e24bbfc2-37c0-4052-95af-f338b0872857-kube-api-access-jpxbt\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.439910 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.440067 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: 
\"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.449449 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.449732 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.450029 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.450174 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.451122 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.541626 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.541695 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxbt\" (UniqueName: \"kubernetes.io/projected/e24bbfc2-37c0-4052-95af-f338b0872857-kube-api-access-jpxbt\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.541721 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.541802 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.548147 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.552551 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.554601 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: 
I1125 20:13:57.578120 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxbt\" (UniqueName: \"kubernetes.io/projected/e24bbfc2-37c0-4052-95af-f338b0872857-kube-api-access-jpxbt\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:57 crc kubenswrapper[4775]: I1125 20:13:57.781887 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:13:58 crc kubenswrapper[4775]: W1125 20:13:58.385083 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode24bbfc2_37c0_4052_95af_f338b0872857.slice/crio-d8290200a36352ad3f82fddee144248233e45b8b4edf8723933f0110612f5646 WatchSource:0}: Error finding container d8290200a36352ad3f82fddee144248233e45b8b4edf8723933f0110612f5646: Status 404 returned error can't find the container with id d8290200a36352ad3f82fddee144248233e45b8b4edf8723933f0110612f5646 Nov 25 20:13:58 crc kubenswrapper[4775]: I1125 20:13:58.396348 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs"] Nov 25 20:13:59 crc kubenswrapper[4775]: I1125 20:13:59.328640 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" event={"ID":"e24bbfc2-37c0-4052-95af-f338b0872857","Type":"ContainerStarted","Data":"40aef65087d22b3cda04af92f1e3aa6b302d296bde4d5586cf6f018be899166e"} Nov 25 20:13:59 crc kubenswrapper[4775]: I1125 20:13:59.328931 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" 
event={"ID":"e24bbfc2-37c0-4052-95af-f338b0872857","Type":"ContainerStarted","Data":"d8290200a36352ad3f82fddee144248233e45b8b4edf8723933f0110612f5646"} Nov 25 20:14:05 crc kubenswrapper[4775]: I1125 20:14:05.387081 4775 generic.go:334] "Generic (PLEG): container finished" podID="e24bbfc2-37c0-4052-95af-f338b0872857" containerID="40aef65087d22b3cda04af92f1e3aa6b302d296bde4d5586cf6f018be899166e" exitCode=0 Nov 25 20:14:05 crc kubenswrapper[4775]: I1125 20:14:05.387151 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" event={"ID":"e24bbfc2-37c0-4052-95af-f338b0872857","Type":"ContainerDied","Data":"40aef65087d22b3cda04af92f1e3aa6b302d296bde4d5586cf6f018be899166e"} Nov 25 20:14:06 crc kubenswrapper[4775]: I1125 20:14:06.877444 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.052326 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ssh-key\") pod \"e24bbfc2-37c0-4052-95af-f338b0872857\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.052371 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ceph\") pod \"e24bbfc2-37c0-4052-95af-f338b0872857\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.052550 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpxbt\" (UniqueName: \"kubernetes.io/projected/e24bbfc2-37c0-4052-95af-f338b0872857-kube-api-access-jpxbt\") pod \"e24bbfc2-37c0-4052-95af-f338b0872857\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " Nov 
25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.052576 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-inventory\") pod \"e24bbfc2-37c0-4052-95af-f338b0872857\" (UID: \"e24bbfc2-37c0-4052-95af-f338b0872857\") " Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.065879 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ceph" (OuterVolumeSpecName: "ceph") pod "e24bbfc2-37c0-4052-95af-f338b0872857" (UID: "e24bbfc2-37c0-4052-95af-f338b0872857"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.065945 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24bbfc2-37c0-4052-95af-f338b0872857-kube-api-access-jpxbt" (OuterVolumeSpecName: "kube-api-access-jpxbt") pod "e24bbfc2-37c0-4052-95af-f338b0872857" (UID: "e24bbfc2-37c0-4052-95af-f338b0872857"). InnerVolumeSpecName "kube-api-access-jpxbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.086329 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-inventory" (OuterVolumeSpecName: "inventory") pod "e24bbfc2-37c0-4052-95af-f338b0872857" (UID: "e24bbfc2-37c0-4052-95af-f338b0872857"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.109277 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e24bbfc2-37c0-4052-95af-f338b0872857" (UID: "e24bbfc2-37c0-4052-95af-f338b0872857"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.155954 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpxbt\" (UniqueName: \"kubernetes.io/projected/e24bbfc2-37c0-4052-95af-f338b0872857-kube-api-access-jpxbt\") on node \"crc\" DevicePath \"\"" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.156409 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.156438 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.156464 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e24bbfc2-37c0-4052-95af-f338b0872857-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.413935 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" event={"ID":"e24bbfc2-37c0-4052-95af-f338b0872857","Type":"ContainerDied","Data":"d8290200a36352ad3f82fddee144248233e45b8b4edf8723933f0110612f5646"} Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.413989 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8290200a36352ad3f82fddee144248233e45b8b4edf8723933f0110612f5646" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.414022 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.518289 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g"] Nov 25 20:14:07 crc kubenswrapper[4775]: E1125 20:14:07.518964 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24bbfc2-37c0-4052-95af-f338b0872857" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.518999 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24bbfc2-37c0-4052-95af-f338b0872857" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.519369 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24bbfc2-37c0-4052-95af-f338b0872857" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.520474 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.523247 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.525141 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.526182 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.526640 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.527049 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.528572 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.530426 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g"] Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.667270 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr4rq\" (UniqueName: \"kubernetes.io/projected/0d5edebb-e2fd-4744-b994-2559c10c9947-kube-api-access-qr4rq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.667337 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/0d5edebb-e2fd-4744-b994-2559c10c9947-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.667381 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.667404 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.667441 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.667501 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 
crc kubenswrapper[4775]: I1125 20:14:07.769367 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr4rq\" (UniqueName: \"kubernetes.io/projected/0d5edebb-e2fd-4744-b994-2559c10c9947-kube-api-access-qr4rq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.769460 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0d5edebb-e2fd-4744-b994-2559c10c9947-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.769530 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.769564 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.769618 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: 
\"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.769743 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.771470 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0d5edebb-e2fd-4744-b994-2559c10c9947-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.777640 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.777784 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.778638 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.782624 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.802603 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr4rq\" (UniqueName: \"kubernetes.io/projected/0d5edebb-e2fd-4744-b994-2559c10c9947-kube-api-access-qr4rq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slq8g\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:07 crc kubenswrapper[4775]: I1125 20:14:07.847727 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:14:08 crc kubenswrapper[4775]: I1125 20:14:08.447984 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g"] Nov 25 20:14:08 crc kubenswrapper[4775]: I1125 20:14:08.864866 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:14:08 crc kubenswrapper[4775]: E1125 20:14:08.865730 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:14:09 crc kubenswrapper[4775]: I1125 20:14:09.433724 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" event={"ID":"0d5edebb-e2fd-4744-b994-2559c10c9947","Type":"ContainerStarted","Data":"d88e47273c0d6b5d81dc1cb1402c69f89bc68cb2c4b1b7e9f137e3cfb8a1ec81"} Nov 25 20:14:09 crc kubenswrapper[4775]: I1125 20:14:09.434158 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" event={"ID":"0d5edebb-e2fd-4744-b994-2559c10c9947","Type":"ContainerStarted","Data":"ccb0009760c098431704d7c8091fe78d09b92962622c16350027424f97ea83c0"} Nov 25 20:14:09 crc kubenswrapper[4775]: I1125 20:14:09.460468 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" podStartSLOduration=2.049495312 podStartE2EDuration="2.460450651s" podCreationTimestamp="2025-11-25 20:14:07 +0000 UTC" firstStartedPulling="2025-11-25 20:14:08.450280089 +0000 UTC 
m=+2430.366642475" lastFinishedPulling="2025-11-25 20:14:08.861235408 +0000 UTC m=+2430.777597814" observedRunningTime="2025-11-25 20:14:09.456115574 +0000 UTC m=+2431.372477940" watchObservedRunningTime="2025-11-25 20:14:09.460450651 +0000 UTC m=+2431.376813017" Nov 25 20:14:22 crc kubenswrapper[4775]: I1125 20:14:22.847792 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:14:22 crc kubenswrapper[4775]: E1125 20:14:22.848558 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:14:33 crc kubenswrapper[4775]: I1125 20:14:33.851834 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:14:33 crc kubenswrapper[4775]: E1125 20:14:33.853084 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:14:44 crc kubenswrapper[4775]: I1125 20:14:44.848181 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:14:44 crc kubenswrapper[4775]: E1125 20:14:44.849306 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:14:56 crc kubenswrapper[4775]: I1125 20:14:56.846943 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:14:56 crc kubenswrapper[4775]: E1125 20:14:56.847952 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.174375 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq"] Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.179034 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.182819 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.183256 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.186987 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq"] Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.232242 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2212fe2-02c7-4803-a221-c05221f0317f-config-volume\") pod \"collect-profiles-29401695-tzsgq\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.232587 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2212fe2-02c7-4803-a221-c05221f0317f-secret-volume\") pod \"collect-profiles-29401695-tzsgq\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.232704 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmgzq\" (UniqueName: \"kubernetes.io/projected/d2212fe2-02c7-4803-a221-c05221f0317f-kube-api-access-jmgzq\") pod \"collect-profiles-29401695-tzsgq\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.333361 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2212fe2-02c7-4803-a221-c05221f0317f-config-volume\") pod \"collect-profiles-29401695-tzsgq\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.333461 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2212fe2-02c7-4803-a221-c05221f0317f-secret-volume\") pod \"collect-profiles-29401695-tzsgq\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.333537 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmgzq\" (UniqueName: \"kubernetes.io/projected/d2212fe2-02c7-4803-a221-c05221f0317f-kube-api-access-jmgzq\") pod \"collect-profiles-29401695-tzsgq\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.335172 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2212fe2-02c7-4803-a221-c05221f0317f-config-volume\") pod \"collect-profiles-29401695-tzsgq\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.355458 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d2212fe2-02c7-4803-a221-c05221f0317f-secret-volume\") pod \"collect-profiles-29401695-tzsgq\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.364465 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmgzq\" (UniqueName: \"kubernetes.io/projected/d2212fe2-02c7-4803-a221-c05221f0317f-kube-api-access-jmgzq\") pod \"collect-profiles-29401695-tzsgq\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.511044 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:00 crc kubenswrapper[4775]: I1125 20:15:00.838503 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq"] Nov 25 20:15:01 crc kubenswrapper[4775]: I1125 20:15:01.008028 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" event={"ID":"d2212fe2-02c7-4803-a221-c05221f0317f","Type":"ContainerStarted","Data":"2f9d9563a64ca2c7dfe8e30a6a24135a33bcb607956f961fe2963c317d623c95"} Nov 25 20:15:01 crc kubenswrapper[4775]: E1125 20:15:01.584622 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2212fe2_02c7_4803_a221_c05221f0317f.slice/crio-conmon-56c1a980f65ee01529c1686e53fb7c8ed8f03620efdab583bff8d81c597ca89b.scope\": RecentStats: unable to find data in memory cache]" Nov 25 20:15:02 crc kubenswrapper[4775]: I1125 20:15:02.021058 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="d2212fe2-02c7-4803-a221-c05221f0317f" containerID="56c1a980f65ee01529c1686e53fb7c8ed8f03620efdab583bff8d81c597ca89b" exitCode=0 Nov 25 20:15:02 crc kubenswrapper[4775]: I1125 20:15:02.021126 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" event={"ID":"d2212fe2-02c7-4803-a221-c05221f0317f","Type":"ContainerDied","Data":"56c1a980f65ee01529c1686e53fb7c8ed8f03620efdab583bff8d81c597ca89b"} Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.429584 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.509759 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2212fe2-02c7-4803-a221-c05221f0317f-secret-volume\") pod \"d2212fe2-02c7-4803-a221-c05221f0317f\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.509929 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmgzq\" (UniqueName: \"kubernetes.io/projected/d2212fe2-02c7-4803-a221-c05221f0317f-kube-api-access-jmgzq\") pod \"d2212fe2-02c7-4803-a221-c05221f0317f\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.510174 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2212fe2-02c7-4803-a221-c05221f0317f-config-volume\") pod \"d2212fe2-02c7-4803-a221-c05221f0317f\" (UID: \"d2212fe2-02c7-4803-a221-c05221f0317f\") " Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.511404 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2212fe2-02c7-4803-a221-c05221f0317f-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "d2212fe2-02c7-4803-a221-c05221f0317f" (UID: "d2212fe2-02c7-4803-a221-c05221f0317f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.517139 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2212fe2-02c7-4803-a221-c05221f0317f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d2212fe2-02c7-4803-a221-c05221f0317f" (UID: "d2212fe2-02c7-4803-a221-c05221f0317f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.517180 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2212fe2-02c7-4803-a221-c05221f0317f-kube-api-access-jmgzq" (OuterVolumeSpecName: "kube-api-access-jmgzq") pod "d2212fe2-02c7-4803-a221-c05221f0317f" (UID: "d2212fe2-02c7-4803-a221-c05221f0317f"). InnerVolumeSpecName "kube-api-access-jmgzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.612692 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2212fe2-02c7-4803-a221-c05221f0317f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.612742 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2212fe2-02c7-4803-a221-c05221f0317f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:03 crc kubenswrapper[4775]: I1125 20:15:03.612764 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmgzq\" (UniqueName: \"kubernetes.io/projected/d2212fe2-02c7-4803-a221-c05221f0317f-kube-api-access-jmgzq\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:04 crc kubenswrapper[4775]: I1125 20:15:04.050272 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" event={"ID":"d2212fe2-02c7-4803-a221-c05221f0317f","Type":"ContainerDied","Data":"2f9d9563a64ca2c7dfe8e30a6a24135a33bcb607956f961fe2963c317d623c95"} Nov 25 20:15:04 crc kubenswrapper[4775]: I1125 20:15:04.050358 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f9d9563a64ca2c7dfe8e30a6a24135a33bcb607956f961fe2963c317d623c95" Nov 25 20:15:04 crc kubenswrapper[4775]: I1125 20:15:04.050390 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq" Nov 25 20:15:04 crc kubenswrapper[4775]: I1125 20:15:04.554898 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw"] Nov 25 20:15:04 crc kubenswrapper[4775]: I1125 20:15:04.568906 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401650-h6dnw"] Nov 25 20:15:04 crc kubenswrapper[4775]: I1125 20:15:04.861447 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54dca2d8-8976-4d24-b97a-a9e867d0d74b" path="/var/lib/kubelet/pods/54dca2d8-8976-4d24-b97a-a9e867d0d74b/volumes" Nov 25 20:15:07 crc kubenswrapper[4775]: I1125 20:15:07.848044 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:15:07 crc kubenswrapper[4775]: E1125 20:15:07.848954 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:15:18 crc kubenswrapper[4775]: I1125 20:15:18.958868 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rfbnn"] Nov 25 20:15:18 crc kubenswrapper[4775]: E1125 20:15:18.960374 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2212fe2-02c7-4803-a221-c05221f0317f" containerName="collect-profiles" Nov 25 20:15:18 crc kubenswrapper[4775]: I1125 20:15:18.960404 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2212fe2-02c7-4803-a221-c05221f0317f" containerName="collect-profiles" Nov 25 20:15:18 crc 
kubenswrapper[4775]: I1125 20:15:18.960863 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2212fe2-02c7-4803-a221-c05221f0317f" containerName="collect-profiles" Nov 25 20:15:18 crc kubenswrapper[4775]: I1125 20:15:18.963998 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:18 crc kubenswrapper[4775]: I1125 20:15:18.973005 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rfbnn"] Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.145192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddc2d\" (UniqueName: \"kubernetes.io/projected/7e6aa545-3cae-4c45-93c5-18d8f477376a-kube-api-access-ddc2d\") pod \"redhat-operators-rfbnn\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.145315 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-catalog-content\") pod \"redhat-operators-rfbnn\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.145342 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-utilities\") pod \"redhat-operators-rfbnn\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.247471 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-catalog-content\") pod \"redhat-operators-rfbnn\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.247515 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-utilities\") pod \"redhat-operators-rfbnn\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.247618 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddc2d\" (UniqueName: \"kubernetes.io/projected/7e6aa545-3cae-4c45-93c5-18d8f477376a-kube-api-access-ddc2d\") pod \"redhat-operators-rfbnn\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.248177 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-catalog-content\") pod \"redhat-operators-rfbnn\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.248294 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-utilities\") pod \"redhat-operators-rfbnn\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.268409 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddc2d\" (UniqueName: 
\"kubernetes.io/projected/7e6aa545-3cae-4c45-93c5-18d8f477376a-kube-api-access-ddc2d\") pod \"redhat-operators-rfbnn\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.283759 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:19 crc kubenswrapper[4775]: I1125 20:15:19.803564 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rfbnn"] Nov 25 20:15:20 crc kubenswrapper[4775]: I1125 20:15:20.300268 4775 generic.go:334] "Generic (PLEG): container finished" podID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerID="0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7" exitCode=0 Nov 25 20:15:20 crc kubenswrapper[4775]: I1125 20:15:20.300317 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfbnn" event={"ID":"7e6aa545-3cae-4c45-93c5-18d8f477376a","Type":"ContainerDied","Data":"0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7"} Nov 25 20:15:20 crc kubenswrapper[4775]: I1125 20:15:20.300344 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfbnn" event={"ID":"7e6aa545-3cae-4c45-93c5-18d8f477376a","Type":"ContainerStarted","Data":"1f6e06ed8041e4ddb2ca6ea48843965627d8053ba2f4ce04f8d05aef20e4bb70"} Nov 25 20:15:21 crc kubenswrapper[4775]: W1125 20:15:21.834294 4775 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e6aa545_3cae_4c45_93c5_18d8f477376a.slice/crio-4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed.scope/pids.max": read 
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e6aa545_3cae_4c45_93c5_18d8f477376a.slice/crio-4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed.scope/pids.max: no such device Nov 25 20:15:22 crc kubenswrapper[4775]: E1125 20:15:22.028325 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e6aa545_3cae_4c45_93c5_18d8f477376a.slice/crio-4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed.scope\": RecentStats: unable to find data in memory cache]" Nov 25 20:15:22 crc kubenswrapper[4775]: I1125 20:15:22.332080 4775 generic.go:334] "Generic (PLEG): container finished" podID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerID="4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed" exitCode=0 Nov 25 20:15:22 crc kubenswrapper[4775]: I1125 20:15:22.332178 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfbnn" event={"ID":"7e6aa545-3cae-4c45-93c5-18d8f477376a","Type":"ContainerDied","Data":"4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed"} Nov 25 20:15:22 crc kubenswrapper[4775]: I1125 20:15:22.848933 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:15:22 crc kubenswrapper[4775]: E1125 20:15:22.849607 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:15:23 crc kubenswrapper[4775]: I1125 20:15:23.347542 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-rfbnn" event={"ID":"7e6aa545-3cae-4c45-93c5-18d8f477376a","Type":"ContainerStarted","Data":"ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49"} Nov 25 20:15:23 crc kubenswrapper[4775]: I1125 20:15:23.380848 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rfbnn" podStartSLOduration=2.916819977 podStartE2EDuration="5.380818039s" podCreationTimestamp="2025-11-25 20:15:18 +0000 UTC" firstStartedPulling="2025-11-25 20:15:20.302056243 +0000 UTC m=+2502.218418609" lastFinishedPulling="2025-11-25 20:15:22.766054265 +0000 UTC m=+2504.682416671" observedRunningTime="2025-11-25 20:15:23.370247353 +0000 UTC m=+2505.286609759" watchObservedRunningTime="2025-11-25 20:15:23.380818039 +0000 UTC m=+2505.297180445" Nov 25 20:15:29 crc kubenswrapper[4775]: I1125 20:15:29.285015 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:29 crc kubenswrapper[4775]: I1125 20:15:29.285791 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:30 crc kubenswrapper[4775]: I1125 20:15:30.348134 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rfbnn" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerName="registry-server" probeResult="failure" output=< Nov 25 20:15:30 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Nov 25 20:15:30 crc kubenswrapper[4775]: > Nov 25 20:15:31 crc kubenswrapper[4775]: I1125 20:15:31.433005 4775 generic.go:334] "Generic (PLEG): container finished" podID="0d5edebb-e2fd-4744-b994-2559c10c9947" containerID="d88e47273c0d6b5d81dc1cb1402c69f89bc68cb2c4b1b7e9f137e3cfb8a1ec81" exitCode=0 Nov 25 20:15:31 crc kubenswrapper[4775]: I1125 20:15:31.433093 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" event={"ID":"0d5edebb-e2fd-4744-b994-2559c10c9947","Type":"ContainerDied","Data":"d88e47273c0d6b5d81dc1cb1402c69f89bc68cb2c4b1b7e9f137e3cfb8a1ec81"} Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.037739 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.172189 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ceph\") pod \"0d5edebb-e2fd-4744-b994-2559c10c9947\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.172266 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ovn-combined-ca-bundle\") pod \"0d5edebb-e2fd-4744-b994-2559c10c9947\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.172349 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-inventory\") pod \"0d5edebb-e2fd-4744-b994-2559c10c9947\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.172415 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ssh-key\") pod \"0d5edebb-e2fd-4744-b994-2559c10c9947\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.172494 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr4rq\" (UniqueName: 
\"kubernetes.io/projected/0d5edebb-e2fd-4744-b994-2559c10c9947-kube-api-access-qr4rq\") pod \"0d5edebb-e2fd-4744-b994-2559c10c9947\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.172562 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0d5edebb-e2fd-4744-b994-2559c10c9947-ovncontroller-config-0\") pod \"0d5edebb-e2fd-4744-b994-2559c10c9947\" (UID: \"0d5edebb-e2fd-4744-b994-2559c10c9947\") " Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.180843 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0d5edebb-e2fd-4744-b994-2559c10c9947" (UID: "0d5edebb-e2fd-4744-b994-2559c10c9947"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.181547 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d5edebb-e2fd-4744-b994-2559c10c9947-kube-api-access-qr4rq" (OuterVolumeSpecName: "kube-api-access-qr4rq") pod "0d5edebb-e2fd-4744-b994-2559c10c9947" (UID: "0d5edebb-e2fd-4744-b994-2559c10c9947"). InnerVolumeSpecName "kube-api-access-qr4rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.182957 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ceph" (OuterVolumeSpecName: "ceph") pod "0d5edebb-e2fd-4744-b994-2559c10c9947" (UID: "0d5edebb-e2fd-4744-b994-2559c10c9947"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.217428 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0d5edebb-e2fd-4744-b994-2559c10c9947" (UID: "0d5edebb-e2fd-4744-b994-2559c10c9947"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.218086 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-inventory" (OuterVolumeSpecName: "inventory") pod "0d5edebb-e2fd-4744-b994-2559c10c9947" (UID: "0d5edebb-e2fd-4744-b994-2559c10c9947"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.225863 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d5edebb-e2fd-4744-b994-2559c10c9947-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0d5edebb-e2fd-4744-b994-2559c10c9947" (UID: "0d5edebb-e2fd-4744-b994-2559c10c9947"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.276047 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.276134 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.276165 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.276188 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5edebb-e2fd-4744-b994-2559c10c9947-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.276215 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr4rq\" (UniqueName: \"kubernetes.io/projected/0d5edebb-e2fd-4744-b994-2559c10c9947-kube-api-access-qr4rq\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.276239 4775 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0d5edebb-e2fd-4744-b994-2559c10c9947-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.457491 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" event={"ID":"0d5edebb-e2fd-4744-b994-2559c10c9947","Type":"ContainerDied","Data":"ccb0009760c098431704d7c8091fe78d09b92962622c16350027424f97ea83c0"} Nov 25 20:15:33 
crc kubenswrapper[4775]: I1125 20:15:33.457874 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccb0009760c098431704d7c8091fe78d09b92962622c16350027424f97ea83c0" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.457539 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slq8g" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.564416 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh"] Nov 25 20:15:33 crc kubenswrapper[4775]: E1125 20:15:33.564891 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5edebb-e2fd-4744-b994-2559c10c9947" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.564914 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5edebb-e2fd-4744-b994-2559c10c9947" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.565132 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5edebb-e2fd-4744-b994-2559c10c9947" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.565852 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.571834 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.571941 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.572070 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.572220 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.572286 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.572401 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.572430 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.577826 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh"] Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.685145 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.685242 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.685408 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.685696 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.685829 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwdw8\" (UniqueName: \"kubernetes.io/projected/79243ac0-3276-49bd-a57d-f0c4f9458add-kube-api-access-wwdw8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 
20:15:33.686162 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.686242 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.788889 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.789012 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.789134 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.789222 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.789286 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.789405 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.789465 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwdw8\" (UniqueName: \"kubernetes.io/projected/79243ac0-3276-49bd-a57d-f0c4f9458add-kube-api-access-wwdw8\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.794265 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.795168 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.795506 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.796242 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.796511 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.797085 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.811612 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwdw8\" (UniqueName: \"kubernetes.io/projected/79243ac0-3276-49bd-a57d-f0c4f9458add-kube-api-access-wwdw8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:33 crc kubenswrapper[4775]: I1125 20:15:33.894464 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:15:34 crc kubenswrapper[4775]: W1125 20:15:34.556035 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79243ac0_3276_49bd_a57d_f0c4f9458add.slice/crio-fdb3160005dc03c3eb35465fdfec2962ee27582e422724b9bf4c57383c7c68a6 WatchSource:0}: Error finding container fdb3160005dc03c3eb35465fdfec2962ee27582e422724b9bf4c57383c7c68a6: Status 404 returned error can't find the container with id fdb3160005dc03c3eb35465fdfec2962ee27582e422724b9bf4c57383c7c68a6 Nov 25 20:15:34 crc kubenswrapper[4775]: I1125 20:15:34.556853 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh"] Nov 25 20:15:35 crc kubenswrapper[4775]: I1125 20:15:35.484052 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" event={"ID":"79243ac0-3276-49bd-a57d-f0c4f9458add","Type":"ContainerStarted","Data":"5e19ec62997f961a8d3b686d98af7565be09b75581737c5bd0f220059b76f7f5"} Nov 25 20:15:35 crc kubenswrapper[4775]: I1125 20:15:35.484530 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" event={"ID":"79243ac0-3276-49bd-a57d-f0c4f9458add","Type":"ContainerStarted","Data":"fdb3160005dc03c3eb35465fdfec2962ee27582e422724b9bf4c57383c7c68a6"} Nov 25 20:15:35 crc kubenswrapper[4775]: I1125 20:15:35.521930 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" podStartSLOduration=1.9987898579999999 podStartE2EDuration="2.521903899s" podCreationTimestamp="2025-11-25 20:15:33 +0000 UTC" firstStartedPulling="2025-11-25 20:15:34.558068891 +0000 UTC m=+2516.474431277" lastFinishedPulling="2025-11-25 20:15:35.081182922 +0000 
UTC m=+2516.997545318" observedRunningTime="2025-11-25 20:15:35.511103905 +0000 UTC m=+2517.427466311" watchObservedRunningTime="2025-11-25 20:15:35.521903899 +0000 UTC m=+2517.438266305" Nov 25 20:15:37 crc kubenswrapper[4775]: I1125 20:15:37.847220 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:15:37 crc kubenswrapper[4775]: E1125 20:15:37.849422 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:15:39 crc kubenswrapper[4775]: I1125 20:15:39.353144 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:39 crc kubenswrapper[4775]: I1125 20:15:39.422591 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:39 crc kubenswrapper[4775]: I1125 20:15:39.603014 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rfbnn"] Nov 25 20:15:40 crc kubenswrapper[4775]: I1125 20:15:40.532515 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rfbnn" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerName="registry-server" containerID="cri-o://ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49" gracePeriod=2 Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.021705 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.157247 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-utilities\") pod \"7e6aa545-3cae-4c45-93c5-18d8f477376a\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.157465 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-catalog-content\") pod \"7e6aa545-3cae-4c45-93c5-18d8f477376a\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.157525 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddc2d\" (UniqueName: \"kubernetes.io/projected/7e6aa545-3cae-4c45-93c5-18d8f477376a-kube-api-access-ddc2d\") pod \"7e6aa545-3cae-4c45-93c5-18d8f477376a\" (UID: \"7e6aa545-3cae-4c45-93c5-18d8f477376a\") " Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.158344 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-utilities" (OuterVolumeSpecName: "utilities") pod "7e6aa545-3cae-4c45-93c5-18d8f477376a" (UID: "7e6aa545-3cae-4c45-93c5-18d8f477376a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.163708 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6aa545-3cae-4c45-93c5-18d8f477376a-kube-api-access-ddc2d" (OuterVolumeSpecName: "kube-api-access-ddc2d") pod "7e6aa545-3cae-4c45-93c5-18d8f477376a" (UID: "7e6aa545-3cae-4c45-93c5-18d8f477376a"). InnerVolumeSpecName "kube-api-access-ddc2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.260934 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddc2d\" (UniqueName: \"kubernetes.io/projected/7e6aa545-3cae-4c45-93c5-18d8f477376a-kube-api-access-ddc2d\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.260984 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.277763 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e6aa545-3cae-4c45-93c5-18d8f477376a" (UID: "7e6aa545-3cae-4c45-93c5-18d8f477376a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.362432 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6aa545-3cae-4c45-93c5-18d8f477376a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.544112 4775 generic.go:334] "Generic (PLEG): container finished" podID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerID="ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49" exitCode=0 Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.544266 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rfbnn" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.544281 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfbnn" event={"ID":"7e6aa545-3cae-4c45-93c5-18d8f477376a","Type":"ContainerDied","Data":"ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49"} Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.549004 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfbnn" event={"ID":"7e6aa545-3cae-4c45-93c5-18d8f477376a","Type":"ContainerDied","Data":"1f6e06ed8041e4ddb2ca6ea48843965627d8053ba2f4ce04f8d05aef20e4bb70"} Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.549048 4775 scope.go:117] "RemoveContainer" containerID="ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.584268 4775 scope.go:117] "RemoveContainer" containerID="4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.608772 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rfbnn"] Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.622726 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rfbnn"] Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.624328 4775 scope.go:117] "RemoveContainer" containerID="0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.696447 4775 scope.go:117] "RemoveContainer" containerID="ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49" Nov 25 20:15:41 crc kubenswrapper[4775]: E1125 20:15:41.697503 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49\": container with ID starting with ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49 not found: ID does not exist" containerID="ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.697581 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49"} err="failed to get container status \"ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49\": rpc error: code = NotFound desc = could not find container \"ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49\": container with ID starting with ef4702e2746afb0aeca5dde1499746106df909e089635ab08757d26030058c49 not found: ID does not exist" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.697621 4775 scope.go:117] "RemoveContainer" containerID="4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed" Nov 25 20:15:41 crc kubenswrapper[4775]: E1125 20:15:41.699091 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed\": container with ID starting with 4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed not found: ID does not exist" containerID="4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.699124 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed"} err="failed to get container status \"4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed\": rpc error: code = NotFound desc = could not find container \"4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed\": container with ID 
starting with 4135db71f6bf876d84f51c64cf816b7c93a0c8e3cd2cbf581a7b4391306453ed not found: ID does not exist" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.699143 4775 scope.go:117] "RemoveContainer" containerID="0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7" Nov 25 20:15:41 crc kubenswrapper[4775]: E1125 20:15:41.699708 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7\": container with ID starting with 0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7 not found: ID does not exist" containerID="0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7" Nov 25 20:15:41 crc kubenswrapper[4775]: I1125 20:15:41.699760 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7"} err="failed to get container status \"0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7\": rpc error: code = NotFound desc = could not find container \"0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7\": container with ID starting with 0ab1a6f6562fb2cc11710e9c3f83af7632002f9cb1aa111760233c511c86c7c7 not found: ID does not exist" Nov 25 20:15:42 crc kubenswrapper[4775]: I1125 20:15:42.861691 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" path="/var/lib/kubelet/pods/7e6aa545-3cae-4c45-93c5-18d8f477376a/volumes" Nov 25 20:15:50 crc kubenswrapper[4775]: I1125 20:15:50.846929 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:15:50 crc kubenswrapper[4775]: E1125 20:15:50.848011 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:15:51 crc kubenswrapper[4775]: I1125 20:15:51.624779 4775 scope.go:117] "RemoveContainer" containerID="9a91074107a17fcce56a70d39828d5158e5bd69aad20fac31d3f42d29a5adfed" Nov 25 20:16:01 crc kubenswrapper[4775]: I1125 20:16:01.847415 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:16:01 crc kubenswrapper[4775]: E1125 20:16:01.848541 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:16:15 crc kubenswrapper[4775]: I1125 20:16:15.848438 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:16:16 crc kubenswrapper[4775]: I1125 20:16:16.989784 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"104e8abf589fc59108ee29bedd8d7d01d4812635de9a1b845be0daeeb1ab5598"} Nov 25 20:16:48 crc kubenswrapper[4775]: I1125 20:16:48.654897 4775 generic.go:334] "Generic (PLEG): container finished" podID="79243ac0-3276-49bd-a57d-f0c4f9458add" containerID="5e19ec62997f961a8d3b686d98af7565be09b75581737c5bd0f220059b76f7f5" exitCode=0 Nov 25 20:16:48 crc kubenswrapper[4775]: I1125 20:16:48.654968 4775 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" event={"ID":"79243ac0-3276-49bd-a57d-f0c4f9458add","Type":"ContainerDied","Data":"5e19ec62997f961a8d3b686d98af7565be09b75581737c5bd0f220059b76f7f5"} Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.165251 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.346735 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-inventory\") pod \"79243ac0-3276-49bd-a57d-f0c4f9458add\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.346804 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ceph\") pod \"79243ac0-3276-49bd-a57d-f0c4f9458add\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.346863 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwdw8\" (UniqueName: \"kubernetes.io/projected/79243ac0-3276-49bd-a57d-f0c4f9458add-kube-api-access-wwdw8\") pod \"79243ac0-3276-49bd-a57d-f0c4f9458add\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.346933 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-nova-metadata-neutron-config-0\") pod \"79243ac0-3276-49bd-a57d-f0c4f9458add\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.347059 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ssh-key\") pod \"79243ac0-3276-49bd-a57d-f0c4f9458add\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.347104 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-metadata-combined-ca-bundle\") pod \"79243ac0-3276-49bd-a57d-f0c4f9458add\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.347186 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-ovn-metadata-agent-neutron-config-0\") pod \"79243ac0-3276-49bd-a57d-f0c4f9458add\" (UID: \"79243ac0-3276-49bd-a57d-f0c4f9458add\") " Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.354592 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "79243ac0-3276-49bd-a57d-f0c4f9458add" (UID: "79243ac0-3276-49bd-a57d-f0c4f9458add"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.354850 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79243ac0-3276-49bd-a57d-f0c4f9458add-kube-api-access-wwdw8" (OuterVolumeSpecName: "kube-api-access-wwdw8") pod "79243ac0-3276-49bd-a57d-f0c4f9458add" (UID: "79243ac0-3276-49bd-a57d-f0c4f9458add"). InnerVolumeSpecName "kube-api-access-wwdw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.357823 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ceph" (OuterVolumeSpecName: "ceph") pod "79243ac0-3276-49bd-a57d-f0c4f9458add" (UID: "79243ac0-3276-49bd-a57d-f0c4f9458add"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.383619 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "79243ac0-3276-49bd-a57d-f0c4f9458add" (UID: "79243ac0-3276-49bd-a57d-f0c4f9458add"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.387229 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-inventory" (OuterVolumeSpecName: "inventory") pod "79243ac0-3276-49bd-a57d-f0c4f9458add" (UID: "79243ac0-3276-49bd-a57d-f0c4f9458add"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.399035 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "79243ac0-3276-49bd-a57d-f0c4f9458add" (UID: "79243ac0-3276-49bd-a57d-f0c4f9458add"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.400486 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "79243ac0-3276-49bd-a57d-f0c4f9458add" (UID: "79243ac0-3276-49bd-a57d-f0c4f9458add"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.450186 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.450240 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.450252 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwdw8\" (UniqueName: \"kubernetes.io/projected/79243ac0-3276-49bd-a57d-f0c4f9458add-kube-api-access-wwdw8\") on node \"crc\" DevicePath \"\"" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.450265 4775 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.450277 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.450291 4775 reconciler_common.go:293] 
"Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.450304 4775 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79243ac0-3276-49bd-a57d-f0c4f9458add-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.680317 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" event={"ID":"79243ac0-3276-49bd-a57d-f0c4f9458add","Type":"ContainerDied","Data":"fdb3160005dc03c3eb35465fdfec2962ee27582e422724b9bf4c57383c7c68a6"} Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.680397 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdb3160005dc03c3eb35465fdfec2962ee27582e422724b9bf4c57383c7c68a6" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.680418 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.958745 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln"] Nov 25 20:16:50 crc kubenswrapper[4775]: E1125 20:16:50.959156 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerName="extract-utilities" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.959179 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerName="extract-utilities" Nov 25 20:16:50 crc kubenswrapper[4775]: E1125 20:16:50.959199 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79243ac0-3276-49bd-a57d-f0c4f9458add" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.959210 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="79243ac0-3276-49bd-a57d-f0c4f9458add" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 20:16:50 crc kubenswrapper[4775]: E1125 20:16:50.959223 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerName="extract-content" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.959230 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerName="extract-content" Nov 25 20:16:50 crc kubenswrapper[4775]: E1125 20:16:50.959253 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerName="registry-server" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.959260 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerName="registry-server" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 
20:16:50.959502 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6aa545-3cae-4c45-93c5-18d8f477376a" containerName="registry-server" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.959536 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="79243ac0-3276-49bd-a57d-f0c4f9458add" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.960247 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.964662 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.964682 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.964802 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.965602 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.966962 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.967166 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:16:50 crc kubenswrapper[4775]: I1125 20:16:50.981703 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln"] Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.062375 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.062722 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmbvr\" (UniqueName: \"kubernetes.io/projected/c7dcb097-31de-4297-9671-ac2644323c39-kube-api-access-tmbvr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.062751 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.063001 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.063108 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.063163 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.166051 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.166171 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.166237 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.166403 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.166528 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmbvr\" (UniqueName: \"kubernetes.io/projected/c7dcb097-31de-4297-9671-ac2644323c39-kube-api-access-tmbvr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.166580 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.171453 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.171472 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: 
I1125 20:16:51.171517 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.172170 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.186340 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.187659 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmbvr\" (UniqueName: \"kubernetes.io/projected/c7dcb097-31de-4297-9671-ac2644323c39-kube-api-access-tmbvr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8xqln\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.289912 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:16:51 crc kubenswrapper[4775]: I1125 20:16:51.682265 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln"] Nov 25 20:16:51 crc kubenswrapper[4775]: W1125 20:16:51.684449 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7dcb097_31de_4297_9671_ac2644323c39.slice/crio-874045b5b9d6ebdfa3b6f8dd4835b22033accc41ca80ce215129b068859c0fb3 WatchSource:0}: Error finding container 874045b5b9d6ebdfa3b6f8dd4835b22033accc41ca80ce215129b068859c0fb3: Status 404 returned error can't find the container with id 874045b5b9d6ebdfa3b6f8dd4835b22033accc41ca80ce215129b068859c0fb3 Nov 25 20:16:52 crc kubenswrapper[4775]: I1125 20:16:52.705581 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" event={"ID":"c7dcb097-31de-4297-9671-ac2644323c39","Type":"ContainerStarted","Data":"c08c9b80b126347375f806fbd1762bc6e046254a030424537762f5b936462830"} Nov 25 20:16:52 crc kubenswrapper[4775]: I1125 20:16:52.705936 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" event={"ID":"c7dcb097-31de-4297-9671-ac2644323c39","Type":"ContainerStarted","Data":"874045b5b9d6ebdfa3b6f8dd4835b22033accc41ca80ce215129b068859c0fb3"} Nov 25 20:16:52 crc kubenswrapper[4775]: I1125 20:16:52.731329 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" podStartSLOduration=2.247460932 podStartE2EDuration="2.731308178s" podCreationTimestamp="2025-11-25 20:16:50 +0000 UTC" firstStartedPulling="2025-11-25 20:16:51.701794912 +0000 UTC m=+2593.618157278" lastFinishedPulling="2025-11-25 20:16:52.185642158 +0000 UTC m=+2594.102004524" 
observedRunningTime="2025-11-25 20:16:52.72323346 +0000 UTC m=+2594.639595846" watchObservedRunningTime="2025-11-25 20:16:52.731308178 +0000 UTC m=+2594.647670534" Nov 25 20:18:41 crc kubenswrapper[4775]: I1125 20:18:41.070797 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:18:41 crc kubenswrapper[4775]: I1125 20:18:41.071553 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:19:11 crc kubenswrapper[4775]: I1125 20:19:11.070791 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:19:11 crc kubenswrapper[4775]: I1125 20:19:11.071548 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:19:41 crc kubenswrapper[4775]: I1125 20:19:41.070102 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 25 20:19:41 crc kubenswrapper[4775]: I1125 20:19:41.070922 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:19:41 crc kubenswrapper[4775]: I1125 20:19:41.071019 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 20:19:41 crc kubenswrapper[4775]: I1125 20:19:41.072242 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"104e8abf589fc59108ee29bedd8d7d01d4812635de9a1b845be0daeeb1ab5598"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:19:41 crc kubenswrapper[4775]: I1125 20:19:41.072350 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://104e8abf589fc59108ee29bedd8d7d01d4812635de9a1b845be0daeeb1ab5598" gracePeriod=600 Nov 25 20:19:41 crc kubenswrapper[4775]: I1125 20:19:41.540996 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="104e8abf589fc59108ee29bedd8d7d01d4812635de9a1b845be0daeeb1ab5598" exitCode=0 Nov 25 20:19:41 crc kubenswrapper[4775]: I1125 20:19:41.541227 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"104e8abf589fc59108ee29bedd8d7d01d4812635de9a1b845be0daeeb1ab5598"} Nov 25 20:19:41 crc kubenswrapper[4775]: I1125 20:19:41.541755 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf"} Nov 25 20:19:41 crc kubenswrapper[4775]: I1125 20:19:41.541790 4775 scope.go:117] "RemoveContainer" containerID="926c20b057a1b2e294c0c34568b3e6fddf8391cb0877fe1a974fba23a24cf717" Nov 25 20:21:41 crc kubenswrapper[4775]: I1125 20:21:41.070098 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:21:41 crc kubenswrapper[4775]: I1125 20:21:41.070730 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:21:57 crc kubenswrapper[4775]: I1125 20:21:57.196727 4775 generic.go:334] "Generic (PLEG): container finished" podID="c7dcb097-31de-4297-9671-ac2644323c39" containerID="c08c9b80b126347375f806fbd1762bc6e046254a030424537762f5b936462830" exitCode=0 Nov 25 20:21:57 crc kubenswrapper[4775]: I1125 20:21:57.196816 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" 
event={"ID":"c7dcb097-31de-4297-9671-ac2644323c39","Type":"ContainerDied","Data":"c08c9b80b126347375f806fbd1762bc6e046254a030424537762f5b936462830"} Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.776181 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.912924 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ceph\") pod \"c7dcb097-31de-4297-9671-ac2644323c39\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.913074 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmbvr\" (UniqueName: \"kubernetes.io/projected/c7dcb097-31de-4297-9671-ac2644323c39-kube-api-access-tmbvr\") pod \"c7dcb097-31de-4297-9671-ac2644323c39\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.913117 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-combined-ca-bundle\") pod \"c7dcb097-31de-4297-9671-ac2644323c39\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.913139 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-secret-0\") pod \"c7dcb097-31de-4297-9671-ac2644323c39\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.913192 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ssh-key\") pod \"c7dcb097-31de-4297-9671-ac2644323c39\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.913259 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-inventory\") pod \"c7dcb097-31de-4297-9671-ac2644323c39\" (UID: \"c7dcb097-31de-4297-9671-ac2644323c39\") " Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.918940 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7dcb097-31de-4297-9671-ac2644323c39-kube-api-access-tmbvr" (OuterVolumeSpecName: "kube-api-access-tmbvr") pod "c7dcb097-31de-4297-9671-ac2644323c39" (UID: "c7dcb097-31de-4297-9671-ac2644323c39"). InnerVolumeSpecName "kube-api-access-tmbvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.919507 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ceph" (OuterVolumeSpecName: "ceph") pod "c7dcb097-31de-4297-9671-ac2644323c39" (UID: "c7dcb097-31de-4297-9671-ac2644323c39"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.921471 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c7dcb097-31de-4297-9671-ac2644323c39" (UID: "c7dcb097-31de-4297-9671-ac2644323c39"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.939400 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c7dcb097-31de-4297-9671-ac2644323c39" (UID: "c7dcb097-31de-4297-9671-ac2644323c39"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.946344 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-inventory" (OuterVolumeSpecName: "inventory") pod "c7dcb097-31de-4297-9671-ac2644323c39" (UID: "c7dcb097-31de-4297-9671-ac2644323c39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:21:58 crc kubenswrapper[4775]: I1125 20:21:58.950365 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c7dcb097-31de-4297-9671-ac2644323c39" (UID: "c7dcb097-31de-4297-9671-ac2644323c39"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.015843 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.015881 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmbvr\" (UniqueName: \"kubernetes.io/projected/c7dcb097-31de-4297-9671-ac2644323c39-kube-api-access-tmbvr\") on node \"crc\" DevicePath \"\"" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.015895 4775 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.015906 4775 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.015919 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.015930 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dcb097-31de-4297-9671-ac2644323c39-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.231206 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" event={"ID":"c7dcb097-31de-4297-9671-ac2644323c39","Type":"ContainerDied","Data":"874045b5b9d6ebdfa3b6f8dd4835b22033accc41ca80ce215129b068859c0fb3"} Nov 25 20:21:59 crc 
kubenswrapper[4775]: I1125 20:21:59.231689 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="874045b5b9d6ebdfa3b6f8dd4835b22033accc41ca80ce215129b068859c0fb3" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.231355 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8xqln" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.347257 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp"] Nov 25 20:21:59 crc kubenswrapper[4775]: E1125 20:21:59.347760 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7dcb097-31de-4297-9671-ac2644323c39" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.347787 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7dcb097-31de-4297-9671-ac2644323c39" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.347976 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7dcb097-31de-4297-9671-ac2644323c39" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.348995 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.352947 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.353047 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.353301 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.352973 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.353580 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.355117 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.356329 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.356408 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.356957 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n82wn" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.384341 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp"] Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526442 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c9gb9\" (UniqueName: \"kubernetes.io/projected/44903c24-1252-485e-a390-3e79df1a521c-kube-api-access-c9gb9\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526481 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526532 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526564 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526595 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ssh-key\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526620 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526642 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526738 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526765 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-custom-ceph-combined-ca-bundle\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526782 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.526836 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628427 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9gb9\" (UniqueName: \"kubernetes.io/projected/44903c24-1252-485e-a390-3e79df1a521c-kube-api-access-c9gb9\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628474 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: 
\"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628522 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628558 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628618 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628694 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628726 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628752 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628768 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.628788 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.629775 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.631451 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.633438 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.633603 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.634034 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.634669 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.635133 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.635314 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.635560 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.638236 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.644491 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9gb9\" (UniqueName: \"kubernetes.io/projected/44903c24-1252-485e-a390-3e79df1a521c-kube-api-access-c9gb9\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:21:59 crc kubenswrapper[4775]: I1125 20:21:59.666888 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:22:00 crc kubenswrapper[4775]: I1125 20:22:00.013785 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp"] Nov 25 20:22:00 crc kubenswrapper[4775]: I1125 20:22:00.023865 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:22:00 crc kubenswrapper[4775]: I1125 20:22:00.241111 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" event={"ID":"44903c24-1252-485e-a390-3e79df1a521c","Type":"ContainerStarted","Data":"ecafeea0f5137d42c9e9365480d9d51a90048843361696285473191d6f496ed1"} Nov 25 20:22:01 crc kubenswrapper[4775]: I1125 20:22:01.249508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" event={"ID":"44903c24-1252-485e-a390-3e79df1a521c","Type":"ContainerStarted","Data":"89af60dbb81943f9f3469196935eaec8ec8587a2a63b5ed0f1e29a3d3067a46e"} Nov 25 20:22:01 crc kubenswrapper[4775]: I1125 20:22:01.279008 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" podStartSLOduration=1.790306389 podStartE2EDuration="2.278989457s" podCreationTimestamp="2025-11-25 20:21:59 +0000 UTC" firstStartedPulling="2025-11-25 20:22:00.023596258 +0000 UTC m=+2901.939958634" lastFinishedPulling="2025-11-25 20:22:00.512279306 +0000 UTC m=+2902.428641702" observedRunningTime="2025-11-25 20:22:01.271638977 +0000 UTC m=+2903.188001343" watchObservedRunningTime="2025-11-25 20:22:01.278989457 +0000 UTC m=+2903.195351813" Nov 25 20:22:11 crc kubenswrapper[4775]: I1125 20:22:11.070142 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:22:11 crc kubenswrapper[4775]: I1125 20:22:11.070615 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.180906 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tb2vz"] Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.186176 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.219905 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tb2vz"] Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.249305 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-utilities\") pod \"certified-operators-tb2vz\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.249387 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l49jt\" (UniqueName: \"kubernetes.io/projected/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-kube-api-access-l49jt\") pod \"certified-operators-tb2vz\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 
20:22:40.249817 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-catalog-content\") pod \"certified-operators-tb2vz\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.351584 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-catalog-content\") pod \"certified-operators-tb2vz\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.351737 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-utilities\") pod \"certified-operators-tb2vz\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.351768 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l49jt\" (UniqueName: \"kubernetes.io/projected/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-kube-api-access-l49jt\") pod \"certified-operators-tb2vz\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.352610 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-catalog-content\") pod \"certified-operators-tb2vz\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 
20:22:40.352884 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-utilities\") pod \"certified-operators-tb2vz\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.373509 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l49jt\" (UniqueName: \"kubernetes.io/projected/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-kube-api-access-l49jt\") pod \"certified-operators-tb2vz\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:40 crc kubenswrapper[4775]: I1125 20:22:40.517787 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.070113 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.070852 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.070904 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.071869 4775 kuberuntime_manager.go:1027] "Message for Container 
of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.071941 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" gracePeriod=600 Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.072952 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tb2vz"] Nov 25 20:22:41 crc kubenswrapper[4775]: E1125 20:22:41.196315 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.701989 4775 generic.go:334] "Generic (PLEG): container finished" podID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerID="f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab" exitCode=0 Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.702039 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb2vz" event={"ID":"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e","Type":"ContainerDied","Data":"f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab"} Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 
20:22:41.702089 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb2vz" event={"ID":"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e","Type":"ContainerStarted","Data":"fae958958e83bee440dd960d37e4bfab20185606caf11bbedfdbddfa089e8b12"} Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.705398 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" exitCode=0 Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.705466 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf"} Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.705524 4775 scope.go:117] "RemoveContainer" containerID="104e8abf589fc59108ee29bedd8d7d01d4812635de9a1b845be0daeeb1ab5598" Nov 25 20:22:41 crc kubenswrapper[4775]: I1125 20:22:41.706312 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:22:41 crc kubenswrapper[4775]: E1125 20:22:41.706591 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:22:42 crc kubenswrapper[4775]: I1125 20:22:42.715920 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb2vz" 
event={"ID":"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e","Type":"ContainerStarted","Data":"4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628"} Nov 25 20:22:43 crc kubenswrapper[4775]: I1125 20:22:43.733020 4775 generic.go:334] "Generic (PLEG): container finished" podID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerID="4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628" exitCode=0 Nov 25 20:22:43 crc kubenswrapper[4775]: I1125 20:22:43.733129 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb2vz" event={"ID":"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e","Type":"ContainerDied","Data":"4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628"} Nov 25 20:22:44 crc kubenswrapper[4775]: I1125 20:22:44.749943 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb2vz" event={"ID":"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e","Type":"ContainerStarted","Data":"a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32"} Nov 25 20:22:44 crc kubenswrapper[4775]: I1125 20:22:44.791666 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tb2vz" podStartSLOduration=2.170770543 podStartE2EDuration="4.791632246s" podCreationTimestamp="2025-11-25 20:22:40 +0000 UTC" firstStartedPulling="2025-11-25 20:22:41.703568428 +0000 UTC m=+2943.619930814" lastFinishedPulling="2025-11-25 20:22:44.324430111 +0000 UTC m=+2946.240792517" observedRunningTime="2025-11-25 20:22:44.776268589 +0000 UTC m=+2946.692630965" watchObservedRunningTime="2025-11-25 20:22:44.791632246 +0000 UTC m=+2946.707994622" Nov 25 20:22:50 crc kubenswrapper[4775]: I1125 20:22:50.517930 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:50 crc kubenswrapper[4775]: I1125 20:22:50.518512 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:50 crc kubenswrapper[4775]: I1125 20:22:50.586538 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:50 crc kubenswrapper[4775]: I1125 20:22:50.882765 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:50 crc kubenswrapper[4775]: I1125 20:22:50.946286 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tb2vz"] Nov 25 20:22:52 crc kubenswrapper[4775]: I1125 20:22:52.827954 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tb2vz" podUID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerName="registry-server" containerID="cri-o://a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32" gracePeriod=2 Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.314984 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.492167 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-catalog-content\") pod \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.492235 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l49jt\" (UniqueName: \"kubernetes.io/projected/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-kube-api-access-l49jt\") pod \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.492291 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-utilities\") pod \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\" (UID: \"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e\") " Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.493429 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-utilities" (OuterVolumeSpecName: "utilities") pod "3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" (UID: "3c94e52a-c757-49cd-8a4a-3e4a0b602c0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.498440 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-kube-api-access-l49jt" (OuterVolumeSpecName: "kube-api-access-l49jt") pod "3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" (UID: "3c94e52a-c757-49cd-8a4a-3e4a0b602c0e"). InnerVolumeSpecName "kube-api-access-l49jt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.594098 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l49jt\" (UniqueName: \"kubernetes.io/projected/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-kube-api-access-l49jt\") on node \"crc\" DevicePath \"\"" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.594139 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.691106 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" (UID: "3c94e52a-c757-49cd-8a4a-3e4a0b602c0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.695680 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.842482 4775 generic.go:334] "Generic (PLEG): container finished" podID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerID="a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32" exitCode=0 Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.842539 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb2vz" event={"ID":"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e","Type":"ContainerDied","Data":"a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32"} Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.842574 4775 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tb2vz" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.842609 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb2vz" event={"ID":"3c94e52a-c757-49cd-8a4a-3e4a0b602c0e","Type":"ContainerDied","Data":"fae958958e83bee440dd960d37e4bfab20185606caf11bbedfdbddfa089e8b12"} Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.842640 4775 scope.go:117] "RemoveContainer" containerID="a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.888442 4775 scope.go:117] "RemoveContainer" containerID="4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.899003 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tb2vz"] Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.910722 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tb2vz"] Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.916670 4775 scope.go:117] "RemoveContainer" containerID="f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab" Nov 25 20:22:53 crc kubenswrapper[4775]: E1125 20:22:53.949988 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c94e52a_c757_49cd_8a4a_3e4a0b602c0e.slice\": RecentStats: unable to find data in memory cache]" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.973053 4775 scope.go:117] "RemoveContainer" containerID="a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32" Nov 25 20:22:53 crc kubenswrapper[4775]: E1125 20:22:53.973864 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32\": container with ID starting with a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32 not found: ID does not exist" containerID="a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.973909 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32"} err="failed to get container status \"a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32\": rpc error: code = NotFound desc = could not find container \"a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32\": container with ID starting with a6cecf5527d228f937e3360c640bb37693fb186512e5290f80f4a87af9792e32 not found: ID does not exist" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.973943 4775 scope.go:117] "RemoveContainer" containerID="4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628" Nov 25 20:22:53 crc kubenswrapper[4775]: E1125 20:22:53.974314 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628\": container with ID starting with 4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628 not found: ID does not exist" containerID="4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.974352 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628"} err="failed to get container status \"4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628\": rpc error: code = NotFound desc = could not find container \"4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628\": container with ID 
starting with 4b714801ed20b1b150c98bf754a0cfc5361f39c38dd46b7e84fe66b87b93b628 not found: ID does not exist" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.974380 4775 scope.go:117] "RemoveContainer" containerID="f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab" Nov 25 20:22:53 crc kubenswrapper[4775]: E1125 20:22:53.974861 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab\": container with ID starting with f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab not found: ID does not exist" containerID="f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab" Nov 25 20:22:53 crc kubenswrapper[4775]: I1125 20:22:53.974895 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab"} err="failed to get container status \"f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab\": rpc error: code = NotFound desc = could not find container \"f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab\": container with ID starting with f8ca79405a3ee76b71d0f5ef9a1c2eff4cf3fe9cd810f254e1130fb6a1b7a0ab not found: ID does not exist" Nov 25 20:22:54 crc kubenswrapper[4775]: I1125 20:22:54.847824 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:22:54 crc kubenswrapper[4775]: E1125 20:22:54.848214 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:22:54 crc kubenswrapper[4775]: I1125 20:22:54.861446 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" path="/var/lib/kubelet/pods/3c94e52a-c757-49cd-8a4a-3e4a0b602c0e/volumes" Nov 25 20:23:06 crc kubenswrapper[4775]: I1125 20:23:06.848444 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:23:06 crc kubenswrapper[4775]: E1125 20:23:06.849681 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:23:18 crc kubenswrapper[4775]: I1125 20:23:18.857952 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:23:18 crc kubenswrapper[4775]: E1125 20:23:18.859062 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:23:29 crc kubenswrapper[4775]: I1125 20:23:29.847834 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:23:29 crc kubenswrapper[4775]: E1125 20:23:29.849203 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:23:43 crc kubenswrapper[4775]: I1125 20:23:43.847181 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:23:43 crc kubenswrapper[4775]: E1125 20:23:43.847964 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:23:58 crc kubenswrapper[4775]: I1125 20:23:58.862642 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:23:58 crc kubenswrapper[4775]: E1125 20:23:58.866684 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:24:09 crc kubenswrapper[4775]: I1125 20:24:09.849908 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:24:09 crc kubenswrapper[4775]: E1125 20:24:09.851029 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:24:24 crc kubenswrapper[4775]: I1125 20:24:24.847263 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:24:24 crc kubenswrapper[4775]: E1125 20:24:24.848049 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:24:38 crc kubenswrapper[4775]: I1125 20:24:38.853561 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:24:38 crc kubenswrapper[4775]: E1125 20:24:38.855276 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:24:49 crc kubenswrapper[4775]: I1125 20:24:49.847937 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:24:49 crc kubenswrapper[4775]: E1125 20:24:49.848885 4775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:25:04 crc kubenswrapper[4775]: I1125 20:25:04.847542 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:25:04 crc kubenswrapper[4775]: E1125 20:25:04.848923 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:25:18 crc kubenswrapper[4775]: I1125 20:25:18.856353 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:25:18 crc kubenswrapper[4775]: E1125 20:25:18.858299 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:25:30 crc kubenswrapper[4775]: I1125 20:25:30.847755 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:25:30 crc kubenswrapper[4775]: E1125 20:25:30.848893 4775 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:25:31 crc kubenswrapper[4775]: I1125 20:25:31.600850 4775 generic.go:334] "Generic (PLEG): container finished" podID="44903c24-1252-485e-a390-3e79df1a521c" containerID="89af60dbb81943f9f3469196935eaec8ec8587a2a63b5ed0f1e29a3d3067a46e" exitCode=0 Nov 25 20:25:31 crc kubenswrapper[4775]: I1125 20:25:31.600887 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" event={"ID":"44903c24-1252-485e-a390-3e79df1a521c","Type":"ContainerDied","Data":"89af60dbb81943f9f3469196935eaec8ec8587a2a63b5ed0f1e29a3d3067a46e"} Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.116878 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.230840 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9gb9\" (UniqueName: \"kubernetes.io/projected/44903c24-1252-485e-a390-3e79df1a521c-kube-api-access-c9gb9\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.230906 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-1\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.230953 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-inventory\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.231029 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ceph\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.231051 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-nova-extra-config-0\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.231078 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-0\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.231098 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-1\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.231191 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-custom-ceph-combined-ca-bundle\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.231293 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ssh-key\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.231314 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-ceph-nova-0\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.231350 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-0\") pod \"44903c24-1252-485e-a390-3e79df1a521c\" (UID: \"44903c24-1252-485e-a390-3e79df1a521c\") " Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.238350 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.239168 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ceph" (OuterVolumeSpecName: "ceph") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.246830 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44903c24-1252-485e-a390-3e79df1a521c-kube-api-access-c9gb9" (OuterVolumeSpecName: "kube-api-access-c9gb9") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "kube-api-access-c9gb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.273098 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.273735 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.276204 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-inventory" (OuterVolumeSpecName: "inventory") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.276057 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.279388 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.282315 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.284105 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.288281 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "44903c24-1252-485e-a390-3e79df1a521c" (UID: "44903c24-1252-485e-a390-3e79df1a521c"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333531 4775 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333581 4775 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333601 4775 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333619 4775 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333637 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9gb9\" (UniqueName: \"kubernetes.io/projected/44903c24-1252-485e-a390-3e79df1a521c-kube-api-access-c9gb9\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333675 4775 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333693 4775 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc 
kubenswrapper[4775]: I1125 20:25:33.333710 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333726 4775 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/44903c24-1252-485e-a390-3e79df1a521c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333743 4775 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.333759 4775 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/44903c24-1252-485e-a390-3e79df1a521c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.625824 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" event={"ID":"44903c24-1252-485e-a390-3e79df1a521c","Type":"ContainerDied","Data":"ecafeea0f5137d42c9e9365480d9d51a90048843361696285473191d6f496ed1"} Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.625886 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecafeea0f5137d42c9e9365480d9d51a90048843361696285473191d6f496ed1" Nov 25 20:25:33 crc kubenswrapper[4775]: I1125 20:25:33.625961 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp" Nov 25 20:25:42 crc kubenswrapper[4775]: I1125 20:25:42.847932 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:25:42 crc kubenswrapper[4775]: E1125 20:25:42.848783 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.372847 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 25 20:25:47 crc kubenswrapper[4775]: E1125 20:25:47.373570 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44903c24-1252-485e-a390-3e79df1a521c" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.373589 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="44903c24-1252-485e-a390-3e79df1a521c" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 25 20:25:47 crc kubenswrapper[4775]: E1125 20:25:47.373607 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerName="extract-utilities" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.373617 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerName="extract-utilities" Nov 25 20:25:47 crc kubenswrapper[4775]: E1125 20:25:47.373632 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" 
containerName="registry-server" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.373641 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerName="registry-server" Nov 25 20:25:47 crc kubenswrapper[4775]: E1125 20:25:47.373696 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerName="extract-content" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.373706 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerName="extract-content" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.373931 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="44903c24-1252-485e-a390-3e79df1a521c" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.373944 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c94e52a-c757-49cd-8a4a-3e4a0b602c0e" containerName="registry-server" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.375345 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.377310 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.377557 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.383518 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433147 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4638194-7edd-4cd4-bf51-e044eb343d94-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433206 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433227 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-run\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433251 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-etc-nvme\") pod 
\"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433270 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433300 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433321 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433336 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433360 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-config-data\") pod \"cinder-volume-volume1-0\" (UID: 
\"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433406 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drbt8\" (UniqueName: \"kubernetes.io/projected/c4638194-7edd-4cd4-bf51-e044eb343d94-kube-api-access-drbt8\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433432 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433450 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433470 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433488 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: 
\"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433502 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.433535 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.448751 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.450178 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.452753 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.470013 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.534942 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.534980 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535001 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535018 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535037 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-scripts\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535061 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535074 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-sys\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535093 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535112 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0119f9f6-2582-47b7-a5a0-cfd393da9234-ceph\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535136 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535163 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drbt8\" (UniqueName: \"kubernetes.io/projected/c4638194-7edd-4cd4-bf51-e044eb343d94-kube-api-access-drbt8\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535178 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-dev\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535195 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535195 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535216 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: 
\"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535233 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535253 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535270 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535286 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535307 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-config-data\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: 
I1125 20:25:47.535322 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-lib-modules\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535354 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535376 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535380 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-run\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535951 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535353 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.535902 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536304 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4638194-7edd-4cd4-bf51-e044eb343d94-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536367 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536388 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536438 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: 
\"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536454 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536472 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-run\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536509 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536527 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r829\" (UniqueName: \"kubernetes.io/projected/0119f9f6-2582-47b7-a5a0-cfd393da9234-kube-api-access-7r829\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536562 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc 
kubenswrapper[4775]: I1125 20:25:47.536587 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536740 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536874 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536907 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-run\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.536927 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.537003 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/c4638194-7edd-4cd4-bf51-e044eb343d94-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.541631 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.545041 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.545814 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.550595 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4638194-7edd-4cd4-bf51-e044eb343d94-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.553017 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drbt8\" (UniqueName: \"kubernetes.io/projected/c4638194-7edd-4cd4-bf51-e044eb343d94-kube-api-access-drbt8\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " 
pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.553434 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4638194-7edd-4cd4-bf51-e044eb343d94-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c4638194-7edd-4cd4-bf51-e044eb343d94\") " pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.638661 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.638922 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-scripts\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.638948 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-sys\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.638968 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.638990 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/0119f9f6-2582-47b7-a5a0-cfd393da9234-ceph\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639014 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639039 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-dev\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639060 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639095 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-config-data\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639111 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-lib-modules\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc 
kubenswrapper[4775]: I1125 20:25:47.639142 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-run\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639169 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639185 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639209 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639228 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639244 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r829\" (UniqueName: 
\"kubernetes.io/projected/0119f9f6-2582-47b7-a5a0-cfd393da9234-kube-api-access-7r829\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.638805 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639736 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639800 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-run\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639799 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639859 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-lib-modules\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639875 
4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639903 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-sys\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639937 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639948 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-dev\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.639967 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0119f9f6-2582-47b7-a5a0-cfd393da9234-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.642489 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-scripts\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 
25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.642974 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0119f9f6-2582-47b7-a5a0-cfd393da9234-ceph\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.643434 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-config-data\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.643522 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.643600 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0119f9f6-2582-47b7-a5a0-cfd393da9234-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.658148 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r829\" (UniqueName: \"kubernetes.io/projected/0119f9f6-2582-47b7-a5a0-cfd393da9234-kube-api-access-7r829\") pod \"cinder-backup-0\" (UID: \"0119f9f6-2582-47b7-a5a0-cfd393da9234\") " pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.710164 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.770694 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.969924 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-lxq6k"] Nov 25 20:25:47 crc kubenswrapper[4775]: I1125 20:25:47.971352 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.016997 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-lxq6k"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.046686 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-4d88-account-create-update-v4ctp"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.047858 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.050031 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.065045 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-4d88-account-create-update-v4ctp"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.071691 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8695b6d995-cnfpw"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.074259 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.079690 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.079739 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tgvmj" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.079696 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.079885 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.081498 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8695b6d995-cnfpw"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.093379 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xgn5\" (UniqueName: \"kubernetes.io/projected/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-kube-api-access-5xgn5\") pod \"manila-db-create-lxq6k\" (UID: \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\") " pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.093579 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-operator-scripts\") pod \"manila-db-create-lxq6k\" (UID: \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\") " pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.151156 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6bf8598cd5-69z2f"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.152852 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.189802 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.194728 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-config-data\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.194779 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1939db-0c4f-45b0-9b3b-3d91590a9730-logs\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.194819 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-operator-scripts\") pod \"manila-db-create-lxq6k\" (UID: \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\") " pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.194996 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-scripts\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.195192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/8d1939db-0c4f-45b0-9b3b-3d91590a9730-horizon-secret-key\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.195235 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xgn5\" (UniqueName: \"kubernetes.io/projected/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-kube-api-access-5xgn5\") pod \"manila-db-create-lxq6k\" (UID: \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\") " pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.195260 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-operator-scripts\") pod \"manila-4d88-account-create-update-v4ctp\" (UID: \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\") " pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.195332 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxlvd\" (UniqueName: \"kubernetes.io/projected/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-kube-api-access-bxlvd\") pod \"manila-4d88-account-create-update-v4ctp\" (UID: \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\") " pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.195499 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2rcz\" (UniqueName: \"kubernetes.io/projected/8d1939db-0c4f-45b0-9b3b-3d91590a9730-kube-api-access-n2rcz\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.195551 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-operator-scripts\") pod \"manila-db-create-lxq6k\" (UID: \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\") " pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.196178 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.198595 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.198979 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.199155 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dvz62" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.199286 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.208354 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bf8598cd5-69z2f"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.234578 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xgn5\" (UniqueName: \"kubernetes.io/projected/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-kube-api-access-5xgn5\") pod \"manila-db-create-lxq6k\" (UID: \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\") " pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.239793 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.296459 4775 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298301 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298709 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-config-data\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298770 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298800 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-scripts\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298823 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpnbg\" (UniqueName: \"kubernetes.io/projected/f540d713-b2ba-459b-84b8-714fe08f05ac-kube-api-access-mpnbg\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298857 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8d1939db-0c4f-45b0-9b3b-3d91590a9730-logs\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298906 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-ceph\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298922 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298944 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-scripts\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298963 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-logs\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.298988 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/f540d713-b2ba-459b-84b8-714fe08f05ac-horizon-secret-key\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299011 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45zh\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-kube-api-access-s45zh\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299057 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-config-data\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299092 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-config-data\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299108 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-scripts\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299147 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299172 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d1939db-0c4f-45b0-9b3b-3d91590a9730-horizon-secret-key\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299202 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-operator-scripts\") pod \"manila-4d88-account-create-update-v4ctp\" (UID: \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\") " pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299228 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxlvd\" (UniqueName: \"kubernetes.io/projected/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-kube-api-access-bxlvd\") pod \"manila-4d88-account-create-update-v4ctp\" (UID: \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\") " pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299255 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299278 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n2rcz\" (UniqueName: \"kubernetes.io/projected/8d1939db-0c4f-45b0-9b3b-3d91590a9730-kube-api-access-n2rcz\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299302 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f540d713-b2ba-459b-84b8-714fe08f05ac-logs\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.299903 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1939db-0c4f-45b0-9b3b-3d91590a9730-logs\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.300711 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-config-data\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.302762 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-scripts\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.303160 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.303294 4775 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.304463 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-operator-scripts\") pod \"manila-4d88-account-create-update-v4ctp\" (UID: \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\") " pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.312069 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d1939db-0c4f-45b0-9b3b-3d91590a9730-horizon-secret-key\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.312400 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.343396 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxlvd\" (UniqueName: \"kubernetes.io/projected/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-kube-api-access-bxlvd\") pod \"manila-4d88-account-create-update-v4ctp\" (UID: \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\") " pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.346935 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2rcz\" (UniqueName: \"kubernetes.io/projected/8d1939db-0c4f-45b0-9b3b-3d91590a9730-kube-api-access-n2rcz\") pod \"horizon-8695b6d995-cnfpw\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.362147 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.372411 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.397101 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.409822 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.409892 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.409921 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.409966 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-ceph\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " 
pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.409988 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410022 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-logs\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410050 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410068 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qmjw\" (UniqueName: \"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-kube-api-access-2qmjw\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410087 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f540d713-b2ba-459b-84b8-714fe08f05ac-horizon-secret-key\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " 
pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410112 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s45zh\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-kube-api-access-s45zh\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410152 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-config-data\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410168 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-config-data\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410182 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-scripts\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410203 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc 
kubenswrapper[4775]: I1125 20:25:48.410232 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-logs\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410283 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410312 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410356 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410387 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f540d713-b2ba-459b-84b8-714fe08f05ac-logs\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410449 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410510 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410529 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-scripts\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.410544 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpnbg\" (UniqueName: \"kubernetes.io/projected/f540d713-b2ba-459b-84b8-714fe08f05ac-kube-api-access-mpnbg\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.423877 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-config-data\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.426064 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.430383 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-logs\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.432057 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.432457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f540d713-b2ba-459b-84b8-714fe08f05ac-logs\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.440369 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.444188 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-scripts\") pod \"horizon-6bf8598cd5-69z2f\" (UID: 
\"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.446169 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-config-data\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.462881 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.464344 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-scripts\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.464920 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f540d713-b2ba-459b-84b8-714fe08f05ac-horizon-secret-key\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.465679 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpnbg\" (UniqueName: \"kubernetes.io/projected/f540d713-b2ba-459b-84b8-714fe08f05ac-kube-api-access-mpnbg\") pod \"horizon-6bf8598cd5-69z2f\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc 
kubenswrapper[4775]: I1125 20:25:48.476895 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.508240 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-ceph\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.510845 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45zh\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-kube-api-access-s45zh\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.511620 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.511698 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.511763 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-ceph\") pod \"glance-default-internal-api-0\" (UID: 
\"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.511780 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qmjw\" (UniqueName: \"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-kube-api-access-2qmjw\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.511833 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.511857 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-logs\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.511894 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.511938 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " 
pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.511992 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.513188 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.517598 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-logs\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.525933 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.531504 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 
20:25:48.549136 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.552373 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qmjw\" (UniqueName: \"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-kube-api-access-2qmjw\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.555473 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.562807 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.583615 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.589521 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " 
pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.615951 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.624628 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.693526 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.698756 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.813721 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0119f9f6-2582-47b7-a5a0-cfd393da9234","Type":"ContainerStarted","Data":"8e9df3b8affb2f16192710b5a38e20368295f45c95b2b1a8dcf5e09ae6505f24"} Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.815149 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c4638194-7edd-4cd4-bf51-e044eb343d94","Type":"ContainerStarted","Data":"7a667eabd8e2f6d513810962260fd559b3bef9d052abeacc5d0bf1c42d260e82"} Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.835393 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:25:48 crc kubenswrapper[4775]: I1125 20:25:48.993040 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8695b6d995-cnfpw"] Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.041085 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-lxq6k"] Nov 25 20:25:49 crc kubenswrapper[4775]: W1125 20:25:49.042290 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb30cfa2_754e_47a1_8dad_08d8ebe919a2.slice/crio-91d3d5a84c30d1a11502777fd267feee99dcbc4e5641d381a1d064f108132b7f WatchSource:0}: Error finding container 91d3d5a84c30d1a11502777fd267feee99dcbc4e5641d381a1d064f108132b7f: Status 404 returned error can't find the container with id 91d3d5a84c30d1a11502777fd267feee99dcbc4e5641d381a1d064f108132b7f Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.054965 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-4d88-account-create-update-v4ctp"] Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.277993 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bf8598cd5-69z2f"] Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.369311 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.830673 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8695b6d995-cnfpw" event={"ID":"8d1939db-0c4f-45b0-9b3b-3d91590a9730","Type":"ContainerStarted","Data":"b4281707f3815fb4659fda8a78479c3f9e53f32359f99d280670c24f7705ed4c"} Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.833592 4775 generic.go:334] "Generic (PLEG): container finished" podID="72b57b64-f5b1-4f21-aafc-54a9ca0c1faa" containerID="9abc99296482f8443e2ee099968bd77645ac719791ac75c65d42030624fc5b97" exitCode=0 
Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.833859 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-lxq6k" event={"ID":"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa","Type":"ContainerDied","Data":"9abc99296482f8443e2ee099968bd77645ac719791ac75c65d42030624fc5b97"} Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.833907 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-lxq6k" event={"ID":"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa","Type":"ContainerStarted","Data":"9dccf7e99cbac552f81e535e9034c9f34e0085dc23181626a441def6cec55886"} Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.837153 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bf8598cd5-69z2f" event={"ID":"f540d713-b2ba-459b-84b8-714fe08f05ac","Type":"ContainerStarted","Data":"74d5b3ab567bb083d26d25071ebcee1fa16bb0596475254df5c01d623cdc935c"} Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.838918 4775 generic.go:334] "Generic (PLEG): container finished" podID="cb30cfa2-754e-47a1-8dad-08d8ebe919a2" containerID="5fa907e2c71a32aa614ed0f2ea0c26cf779009ca44d7a7b7c46479b41b8f7787" exitCode=0 Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.838983 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-4d88-account-create-update-v4ctp" event={"ID":"cb30cfa2-754e-47a1-8dad-08d8ebe919a2","Type":"ContainerDied","Data":"5fa907e2c71a32aa614ed0f2ea0c26cf779009ca44d7a7b7c46479b41b8f7787"} Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.839003 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-4d88-account-create-update-v4ctp" event={"ID":"cb30cfa2-754e-47a1-8dad-08d8ebe919a2","Type":"ContainerStarted","Data":"91d3d5a84c30d1a11502777fd267feee99dcbc4e5641d381a1d064f108132b7f"} Nov 25 20:25:49 crc kubenswrapper[4775]: I1125 20:25:49.841580 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47","Type":"ContainerStarted","Data":"fd8f2e9166fd08a4ef364863a1ab8cd6604d08122615dd6c56d7235267d8492d"} Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.019348 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:25:50 crc kubenswrapper[4775]: W1125 20:25:50.028819 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3feaa9a3_7a4c_4bf1_a5ec_5abaa6c6493b.slice/crio-9f4e040311c3218759539ab27a3ea4394813bc87f937c1749136699659892ddc WatchSource:0}: Error finding container 9f4e040311c3218759539ab27a3ea4394813bc87f937c1749136699659892ddc: Status 404 returned error can't find the container with id 9f4e040311c3218759539ab27a3ea4394813bc87f937c1749136699659892ddc Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.592052 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6bf8598cd5-69z2f"] Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.614263 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7679659b64-d62zj"] Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.615914 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.624151 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.633370 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7679659b64-d62zj"] Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.666309 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-tls-certs\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.674753 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-secret-key\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.674851 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-scripts\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.675003 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-combined-ca-bundle\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 
20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.675029 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-config-data\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.675071 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82754\" (UniqueName: \"kubernetes.io/projected/e5501671-0373-42f3-b08b-2b0c4c6049fa-kube-api-access-82754\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.675141 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5501671-0373-42f3-b08b-2b0c4c6049fa-logs\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.675267 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8695b6d995-cnfpw"] Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.719463 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77ddd59696-rlw9m"] Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.720962 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.746931 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.760467 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77ddd59696-rlw9m"] Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785473 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6f1f978-0027-4119-8469-5acf67c75746-config-data\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785568 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqhtn\" (UniqueName: \"kubernetes.io/projected/d6f1f978-0027-4119-8469-5acf67c75746-kube-api-access-jqhtn\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785598 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-combined-ca-bundle\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785632 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-config-data\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc 
kubenswrapper[4775]: I1125 20:25:50.785666 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f1f978-0027-4119-8469-5acf67c75746-horizon-tls-certs\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785699 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82754\" (UniqueName: \"kubernetes.io/projected/e5501671-0373-42f3-b08b-2b0c4c6049fa-kube-api-access-82754\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785747 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5501671-0373-42f3-b08b-2b0c4c6049fa-logs\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785780 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6f1f978-0027-4119-8469-5acf67c75746-horizon-secret-key\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785810 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6f1f978-0027-4119-8469-5acf67c75746-scripts\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785891 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f1f978-0027-4119-8469-5acf67c75746-combined-ca-bundle\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785918 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f1f978-0027-4119-8469-5acf67c75746-logs\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.785942 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-tls-certs\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.786000 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-secret-key\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.786029 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-scripts\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.787378 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e5501671-0373-42f3-b08b-2b0c4c6049fa-logs\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.788634 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-scripts\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.789834 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-config-data\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.795548 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.802336 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-secret-key\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.837229 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-tls-certs\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.840477 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-combined-ca-bundle\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.841327 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82754\" (UniqueName: \"kubernetes.io/projected/e5501671-0373-42f3-b08b-2b0c4c6049fa-kube-api-access-82754\") pod \"horizon-7679659b64-d62zj\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.894305 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqhtn\" (UniqueName: \"kubernetes.io/projected/d6f1f978-0027-4119-8469-5acf67c75746-kube-api-access-jqhtn\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.894359 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f1f978-0027-4119-8469-5acf67c75746-horizon-tls-certs\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.894425 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6f1f978-0027-4119-8469-5acf67c75746-horizon-secret-key\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.894446 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d6f1f978-0027-4119-8469-5acf67c75746-scripts\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.894496 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f1f978-0027-4119-8469-5acf67c75746-combined-ca-bundle\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.894514 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f1f978-0027-4119-8469-5acf67c75746-logs\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.894562 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6f1f978-0027-4119-8469-5acf67c75746-config-data\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.895317 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c4638194-7edd-4cd4-bf51-e044eb343d94","Type":"ContainerStarted","Data":"8f0e9792a9427384cd8e4f9020d10d0c9505fba978edb0a380a77b15f9f665fd"} Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.895346 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c4638194-7edd-4cd4-bf51-e044eb343d94","Type":"ContainerStarted","Data":"5ff3ffe895cb283147134e6fd5c489b539ddec884c2a5ea47d2fba470bfb70d7"} Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 
20:25:50.901850 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6f1f978-0027-4119-8469-5acf67c75746-scripts\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.902882 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6f1f978-0027-4119-8469-5acf67c75746-config-data\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.905201 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f1f978-0027-4119-8469-5acf67c75746-logs\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.916684 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f1f978-0027-4119-8469-5acf67c75746-horizon-tls-certs\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.917375 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0119f9f6-2582-47b7-a5a0-cfd393da9234","Type":"ContainerStarted","Data":"ce7fcca4b46a1b92148a8a9dc3efa5c0f4e7f2027577996a4bebb03f7f0a3f85"} Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.917414 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"0119f9f6-2582-47b7-a5a0-cfd393da9234","Type":"ContainerStarted","Data":"22e24249dd7e327295a359a68487becf2fa9664f7253875ed87bc97d1ccab2a1"} Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.919289 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.819224073 podStartE2EDuration="3.919272417s" podCreationTimestamp="2025-11-25 20:25:47 +0000 UTC" firstStartedPulling="2025-11-25 20:25:48.608121436 +0000 UTC m=+3130.524483802" lastFinishedPulling="2025-11-25 20:25:49.70816978 +0000 UTC m=+3131.624532146" observedRunningTime="2025-11-25 20:25:50.91682032 +0000 UTC m=+3132.833182686" watchObservedRunningTime="2025-11-25 20:25:50.919272417 +0000 UTC m=+3132.835634783" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.921035 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6f1f978-0027-4119-8469-5acf67c75746-horizon-secret-key\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.921155 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b","Type":"ContainerStarted","Data":"9f4e040311c3218759539ab27a3ea4394813bc87f937c1749136699659892ddc"} Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.922732 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47","Type":"ContainerStarted","Data":"c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91"} Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.950853 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqhtn\" (UniqueName: 
\"kubernetes.io/projected/d6f1f978-0027-4119-8469-5acf67c75746-kube-api-access-jqhtn\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.952351 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f1f978-0027-4119-8469-5acf67c75746-combined-ca-bundle\") pod \"horizon-77ddd59696-rlw9m\" (UID: \"d6f1f978-0027-4119-8469-5acf67c75746\") " pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:50 crc kubenswrapper[4775]: I1125 20:25:50.982929 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.036825055 podStartE2EDuration="3.982906442s" podCreationTimestamp="2025-11-25 20:25:47 +0000 UTC" firstStartedPulling="2025-11-25 20:25:48.763503391 +0000 UTC m=+3130.679865747" lastFinishedPulling="2025-11-25 20:25:49.709584768 +0000 UTC m=+3131.625947134" observedRunningTime="2025-11-25 20:25:50.961169153 +0000 UTC m=+3132.877531539" watchObservedRunningTime="2025-11-25 20:25:50.982906442 +0000 UTC m=+3132.899268808" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.013141 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.070022 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.244256 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.308910 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xgn5\" (UniqueName: \"kubernetes.io/projected/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-kube-api-access-5xgn5\") pod \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\" (UID: \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\") " Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.309308 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-operator-scripts\") pod \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\" (UID: \"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa\") " Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.310434 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72b57b64-f5b1-4f21-aafc-54a9ca0c1faa" (UID: "72b57b64-f5b1-4f21-aafc-54a9ca0c1faa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.322879 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-kube-api-access-5xgn5" (OuterVolumeSpecName: "kube-api-access-5xgn5") pod "72b57b64-f5b1-4f21-aafc-54a9ca0c1faa" (UID: "72b57b64-f5b1-4f21-aafc-54a9ca0c1faa"). InnerVolumeSpecName "kube-api-access-5xgn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.412443 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.412471 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xgn5\" (UniqueName: \"kubernetes.io/projected/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa-kube-api-access-5xgn5\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.448132 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.514377 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxlvd\" (UniqueName: \"kubernetes.io/projected/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-kube-api-access-bxlvd\") pod \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\" (UID: \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\") " Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.514825 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-operator-scripts\") pod \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\" (UID: \"cb30cfa2-754e-47a1-8dad-08d8ebe919a2\") " Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.516423 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb30cfa2-754e-47a1-8dad-08d8ebe919a2" (UID: "cb30cfa2-754e-47a1-8dad-08d8ebe919a2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.521772 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-kube-api-access-bxlvd" (OuterVolumeSpecName: "kube-api-access-bxlvd") pod "cb30cfa2-754e-47a1-8dad-08d8ebe919a2" (UID: "cb30cfa2-754e-47a1-8dad-08d8ebe919a2"). InnerVolumeSpecName "kube-api-access-bxlvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.522777 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77ddd59696-rlw9m"] Nov 25 20:25:51 crc kubenswrapper[4775]: W1125 20:25:51.530679 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6f1f978_0027_4119_8469_5acf67c75746.slice/crio-6376129feeb14d59dd2d9f0359b6762c24f4e2099d2a158617e9b7a84dc777cf WatchSource:0}: Error finding container 6376129feeb14d59dd2d9f0359b6762c24f4e2099d2a158617e9b7a84dc777cf: Status 404 returned error can't find the container with id 6376129feeb14d59dd2d9f0359b6762c24f4e2099d2a158617e9b7a84dc777cf Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.617118 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxlvd\" (UniqueName: \"kubernetes.io/projected/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-kube-api-access-bxlvd\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.617146 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb30cfa2-754e-47a1-8dad-08d8ebe919a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.640671 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7679659b64-d62zj"] Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.942022 
4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-lxq6k" event={"ID":"72b57b64-f5b1-4f21-aafc-54a9ca0c1faa","Type":"ContainerDied","Data":"9dccf7e99cbac552f81e535e9034c9f34e0085dc23181626a441def6cec55886"} Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.942324 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dccf7e99cbac552f81e535e9034c9f34e0085dc23181626a441def6cec55886" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.942403 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-lxq6k" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.946750 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b","Type":"ContainerStarted","Data":"3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062"} Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.952559 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-4d88-account-create-update-v4ctp" event={"ID":"cb30cfa2-754e-47a1-8dad-08d8ebe919a2","Type":"ContainerDied","Data":"91d3d5a84c30d1a11502777fd267feee99dcbc4e5641d381a1d064f108132b7f"} Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.952932 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d3d5a84c30d1a11502777fd267feee99dcbc4e5641d381a1d064f108132b7f" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.953049 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-4d88-account-create-update-v4ctp" Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.958224 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47","Type":"ContainerStarted","Data":"0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5"} Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.958376 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerName="glance-log" containerID="cri-o://c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91" gracePeriod=30 Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.958772 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerName="glance-httpd" containerID="cri-o://0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5" gracePeriod=30 Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.962764 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77ddd59696-rlw9m" event={"ID":"d6f1f978-0027-4119-8469-5acf67c75746","Type":"ContainerStarted","Data":"6376129feeb14d59dd2d9f0359b6762c24f4e2099d2a158617e9b7a84dc777cf"} Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.983017 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7679659b64-d62zj" event={"ID":"e5501671-0373-42f3-b08b-2b0c4c6049fa","Type":"ContainerStarted","Data":"d1bd08058a0bfd28cc3a65e0b1d4b8a3638642a58d4f077cafae856d3425a56f"} Nov 25 20:25:51 crc kubenswrapper[4775]: I1125 20:25:51.993798 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.993766437 podStartE2EDuration="3.993766437s" 
podCreationTimestamp="2025-11-25 20:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:25:51.983462917 +0000 UTC m=+3133.899825283" watchObservedRunningTime="2025-11-25 20:25:51.993766437 +0000 UTC m=+3133.910128803" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.666870 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.710823 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.771384 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.798173 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-logs\") pod \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.798232 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-httpd-run\") pod \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.798299 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-scripts\") pod \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.798338 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-ceph\") pod \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.799132 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" (UID: "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.799201 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-logs" (OuterVolumeSpecName: "logs") pod "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" (UID: "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.799614 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-public-tls-certs\") pod \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.799638 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-combined-ca-bundle\") pod \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.799700 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s45zh\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-kube-api-access-s45zh\") pod \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.799722 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.799752 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-config-data\") pod \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\" (UID: \"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47\") " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.800208 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.800224 4775 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.804613 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-scripts" (OuterVolumeSpecName: "scripts") pod "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" (UID: "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.804904 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-kube-api-access-s45zh" (OuterVolumeSpecName: "kube-api-access-s45zh") pod "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" (UID: "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47"). InnerVolumeSpecName "kube-api-access-s45zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.808428 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-ceph" (OuterVolumeSpecName: "ceph") pod "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" (UID: "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.819540 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" (UID: "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47"). 
InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.826840 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" (UID: "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.851851 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" (UID: "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.858099 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-config-data" (OuterVolumeSpecName: "config-data") pod "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" (UID: "e260fa6a-5df2-4db3-8d74-11a9dbe5bd47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.902074 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.902106 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.902117 4775 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.902130 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.902141 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s45zh\" (UniqueName: \"kubernetes.io/projected/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-kube-api-access-s45zh\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.902172 4775 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.902181 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:52 crc kubenswrapper[4775]: I1125 20:25:52.923547 4775 operation_generator.go:917] UnmountDevice 
succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.001103 4775 generic.go:334] "Generic (PLEG): container finished" podID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerID="0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5" exitCode=0 Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.001128 4775 generic.go:334] "Generic (PLEG): container finished" podID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerID="c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91" exitCode=143 Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.001208 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.001941 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47","Type":"ContainerDied","Data":"0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5"} Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.001964 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47","Type":"ContainerDied","Data":"c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91"} Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.001974 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e260fa6a-5df2-4db3-8d74-11a9dbe5bd47","Type":"ContainerDied","Data":"fd8f2e9166fd08a4ef364863a1ab8cd6604d08122615dd6c56d7235267d8492d"} Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.001987 4775 scope.go:117] "RemoveContainer" containerID="0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 
20:25:53.005194 4775 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.008582 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerName="glance-log" containerID="cri-o://3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062" gracePeriod=30 Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.009277 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerName="glance-httpd" containerID="cri-o://7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8" gracePeriod=30 Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.009694 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b","Type":"ContainerStarted","Data":"7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8"} Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.037702 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.056233 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.068424 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:25:53 crc kubenswrapper[4775]: E1125 20:25:53.068994 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb30cfa2-754e-47a1-8dad-08d8ebe919a2" containerName="mariadb-account-create-update" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 
20:25:53.069015 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb30cfa2-754e-47a1-8dad-08d8ebe919a2" containerName="mariadb-account-create-update" Nov 25 20:25:53 crc kubenswrapper[4775]: E1125 20:25:53.069036 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerName="glance-log" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.069044 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerName="glance-log" Nov 25 20:25:53 crc kubenswrapper[4775]: E1125 20:25:53.069076 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerName="glance-httpd" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.069081 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerName="glance-httpd" Nov 25 20:25:53 crc kubenswrapper[4775]: E1125 20:25:53.069104 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b57b64-f5b1-4f21-aafc-54a9ca0c1faa" containerName="mariadb-database-create" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.069110 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b57b64-f5b1-4f21-aafc-54a9ca0c1faa" containerName="mariadb-database-create" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.069321 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerName="glance-httpd" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.069334 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb30cfa2-754e-47a1-8dad-08d8ebe919a2" containerName="mariadb-account-create-update" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.069343 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b57b64-f5b1-4f21-aafc-54a9ca0c1faa" containerName="mariadb-database-create" Nov 25 20:25:53 crc 
kubenswrapper[4775]: I1125 20:25:53.069352 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" containerName="glance-log" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.070459 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.072299 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.072729 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.081885 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.081868076 podStartE2EDuration="5.081868076s" podCreationTimestamp="2025-11-25 20:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:25:53.047018391 +0000 UTC m=+3134.963380757" watchObservedRunningTime="2025-11-25 20:25:53.081868076 +0000 UTC m=+3134.998230442" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.084226 4775 scope.go:117] "RemoveContainer" containerID="c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.113661 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.134178 4775 scope.go:117] "RemoveContainer" containerID="0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5" Nov 25 20:25:53 crc kubenswrapper[4775]: E1125 20:25:53.138014 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5\": container with ID starting with 0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5 not found: ID does not exist" containerID="0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.138068 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5"} err="failed to get container status \"0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5\": rpc error: code = NotFound desc = could not find container \"0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5\": container with ID starting with 0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5 not found: ID does not exist" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.138100 4775 scope.go:117] "RemoveContainer" containerID="c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91" Nov 25 20:25:53 crc kubenswrapper[4775]: E1125 20:25:53.138439 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91\": container with ID starting with c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91 not found: ID does not exist" containerID="c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.138477 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91"} err="failed to get container status \"c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91\": rpc error: code = NotFound desc = could not find container \"c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91\": container 
with ID starting with c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91 not found: ID does not exist" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.138502 4775 scope.go:117] "RemoveContainer" containerID="0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.138906 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5"} err="failed to get container status \"0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5\": rpc error: code = NotFound desc = could not find container \"0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5\": container with ID starting with 0c8a4e139f220ecfd1c606c24328cf18851c9851821b5fb505512904d365a9f5 not found: ID does not exist" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.138929 4775 scope.go:117] "RemoveContainer" containerID="c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.139803 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91"} err="failed to get container status \"c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91\": rpc error: code = NotFound desc = could not find container \"c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91\": container with ID starting with c7660eb90f884d8aea11dc8ecd6b3bad295158bbfd79a19e27cad92c74ec7f91 not found: ID does not exist" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.225465 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhfb8\" (UniqueName: \"kubernetes.io/projected/9ff9ed95-35ed-4232-b848-8d85332bcb8f-kube-api-access-rhfb8\") pod \"glance-default-external-api-0\" (UID: 
\"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.225525 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff9ed95-35ed-4232-b848-8d85332bcb8f-logs\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.225662 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9ff9ed95-35ed-4232-b848-8d85332bcb8f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.225734 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.225772 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.225874 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.225903 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-scripts\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.225930 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9ff9ed95-35ed-4232-b848-8d85332bcb8f-ceph\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.225957 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327444 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327494 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327564 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-config-data\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327582 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-scripts\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327603 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9ff9ed95-35ed-4232-b848-8d85332bcb8f-ceph\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327628 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327664 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhfb8\" (UniqueName: \"kubernetes.io/projected/9ff9ed95-35ed-4232-b848-8d85332bcb8f-kube-api-access-rhfb8\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 
20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327683 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff9ed95-35ed-4232-b848-8d85332bcb8f-logs\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327742 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9ff9ed95-35ed-4232-b848-8d85332bcb8f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.327944 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.328249 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9ff9ed95-35ed-4232-b848-8d85332bcb8f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.328722 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff9ed95-35ed-4232-b848-8d85332bcb8f-logs\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.333637 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-config-data\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.336562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-scripts\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.337358 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.338036 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff9ed95-35ed-4232-b848-8d85332bcb8f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.347404 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9ff9ed95-35ed-4232-b848-8d85332bcb8f-ceph\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.353432 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhfb8\" (UniqueName: 
\"kubernetes.io/projected/9ff9ed95-35ed-4232-b848-8d85332bcb8f-kube-api-access-rhfb8\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.369030 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"9ff9ed95-35ed-4232-b848-8d85332bcb8f\") " pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.394794 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.557369 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-mx4vt"] Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.559083 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.566883 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-rqgf4" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.566941 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.569554 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-mx4vt"] Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.739373 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-config-data\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.739425 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-job-config-data\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.739543 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-combined-ca-bundle\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.739634 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zs52\" (UniqueName: 
\"kubernetes.io/projected/6a582aec-b4ff-44a8-b217-9079392a5c8f-kube-api-access-7zs52\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.824083 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.842567 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-combined-ca-bundle\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.842740 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zs52\" (UniqueName: \"kubernetes.io/projected/6a582aec-b4ff-44a8-b217-9079392a5c8f-kube-api-access-7zs52\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.842781 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-config-data\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.842799 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-job-config-data\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.850762 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-job-config-data\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.863489 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-combined-ca-bundle\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.863680 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zs52\" (UniqueName: \"kubernetes.io/projected/6a582aec-b4ff-44a8-b217-9079392a5c8f-kube-api-access-7zs52\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.864320 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-config-data\") pod \"manila-db-sync-mx4vt\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.888909 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mx4vt" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.943486 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-scripts\") pod \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.943549 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-logs\") pod \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.943573 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qmjw\" (UniqueName: \"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-kube-api-access-2qmjw\") pod \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.943588 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-combined-ca-bundle\") pod \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.943611 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-httpd-run\") pod \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.943644 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-internal-tls-certs\") pod \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.943673 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.943717 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-ceph\") pod \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.943836 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-config-data\") pod \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\" (UID: \"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b\") " Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.944362 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-logs" (OuterVolumeSpecName: "logs") pod "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" (UID: "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.944377 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" (UID: "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.947859 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.947882 4775 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.948017 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-scripts" (OuterVolumeSpecName: "scripts") pod "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" (UID: "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.964307 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-kube-api-access-2qmjw" (OuterVolumeSpecName: "kube-api-access-2qmjw") pod "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" (UID: "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b"). InnerVolumeSpecName "kube-api-access-2qmjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.964406 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" (UID: "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.975369 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-ceph" (OuterVolumeSpecName: "ceph") pod "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" (UID: "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:25:53 crc kubenswrapper[4775]: I1125 20:25:53.984941 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" (UID: "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.043244 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-config-data" (OuterVolumeSpecName: "config-data") pod "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" (UID: "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.045299 4775 generic.go:334] "Generic (PLEG): container finished" podID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerID="7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8" exitCode=0 Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.045325 4775 generic.go:334] "Generic (PLEG): container finished" podID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerID="3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062" exitCode=143 Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.045423 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b","Type":"ContainerDied","Data":"7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8"} Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.045454 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b","Type":"ContainerDied","Data":"3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062"} Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.045465 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b","Type":"ContainerDied","Data":"9f4e040311c3218759539ab27a3ea4394813bc87f937c1749136699659892ddc"} Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.045480 4775 scope.go:117] "RemoveContainer" containerID="7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.045610 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.046794 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" (UID: "3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.049357 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.049379 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qmjw\" (UniqueName: \"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-kube-api-access-2qmjw\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.049391 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.049401 4775 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.049429 4775 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.049439 4775 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.049447 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.077916 4775 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.089415 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.108822 4775 scope.go:117] "RemoveContainer" containerID="3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062" Nov 25 20:25:54 crc kubenswrapper[4775]: W1125 20:25:54.117079 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ff9ed95_35ed_4232_b848_8d85332bcb8f.slice/crio-0caa5a29bcf05433352563b401f0642593293251674680a402e57b8b4b6047fd WatchSource:0}: Error finding container 0caa5a29bcf05433352563b401f0642593293251674680a402e57b8b4b6047fd: Status 404 returned error can't find the container with id 0caa5a29bcf05433352563b401f0642593293251674680a402e57b8b4b6047fd Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.128861 4775 scope.go:117] "RemoveContainer" containerID="7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8" Nov 25 20:25:54 crc kubenswrapper[4775]: E1125 20:25:54.129669 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8\": container with ID starting with 
7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8 not found: ID does not exist" containerID="7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.129721 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8"} err="failed to get container status \"7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8\": rpc error: code = NotFound desc = could not find container \"7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8\": container with ID starting with 7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8 not found: ID does not exist" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.129756 4775 scope.go:117] "RemoveContainer" containerID="3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062" Nov 25 20:25:54 crc kubenswrapper[4775]: E1125 20:25:54.132631 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062\": container with ID starting with 3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062 not found: ID does not exist" containerID="3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.132693 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062"} err="failed to get container status \"3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062\": rpc error: code = NotFound desc = could not find container \"3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062\": container with ID starting with 3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062 not found: ID does not 
exist" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.132717 4775 scope.go:117] "RemoveContainer" containerID="7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.134145 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8"} err="failed to get container status \"7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8\": rpc error: code = NotFound desc = could not find container \"7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8\": container with ID starting with 7b07ec24c37c817e9144e78057d43a317e3a54f49756b75e5422fcc202cb49a8 not found: ID does not exist" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.134178 4775 scope.go:117] "RemoveContainer" containerID="3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.136429 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062"} err="failed to get container status \"3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062\": rpc error: code = NotFound desc = could not find container \"3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062\": container with ID starting with 3d4ea4f0bba37d7b94396e0713488454474feef6e26b39200ab95f65a3404062 not found: ID does not exist" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.151117 4775 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.384435 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:25:54 crc 
kubenswrapper[4775]: I1125 20:25:54.415437 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.427959 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:25:54 crc kubenswrapper[4775]: E1125 20:25:54.428403 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerName="glance-log" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.428420 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerName="glance-log" Nov 25 20:25:54 crc kubenswrapper[4775]: E1125 20:25:54.428444 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerName="glance-httpd" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.428450 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerName="glance-httpd" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.428623 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerName="glance-httpd" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.428683 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" containerName="glance-log" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.429680 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.433717 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.433898 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.458667 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:25:54 crc kubenswrapper[4775]: W1125 20:25:54.470016 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a582aec_b4ff_44a8_b217_9079392a5c8f.slice/crio-caa4e95a1b065dfce4e34ecb574dbd8d7854ceb216e284c9585385c858d6bf4d WatchSource:0}: Error finding container caa4e95a1b065dfce4e34ecb574dbd8d7854ceb216e284c9585385c858d6bf4d: Status 404 returned error can't find the container with id caa4e95a1b065dfce4e34ecb574dbd8d7854ceb216e284c9585385c858d6bf4d Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.479297 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-mx4vt"] Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.561318 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc4e051-af7c-469f-945c-46162d1aceb2-logs\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.562020 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.562069 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.562097 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.562176 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.562197 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mzr5\" (UniqueName: \"kubernetes.io/projected/4fc4e051-af7c-469f-945c-46162d1aceb2-kube-api-access-9mzr5\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.562250 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fc4e051-af7c-469f-945c-46162d1aceb2-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.562270 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.562295 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4fc4e051-af7c-469f-945c-46162d1aceb2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.664687 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.664735 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mzr5\" (UniqueName: \"kubernetes.io/projected/4fc4e051-af7c-469f-945c-46162d1aceb2-kube-api-access-9mzr5\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.664807 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fc4e051-af7c-469f-945c-46162d1aceb2-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.664829 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.664856 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4fc4e051-af7c-469f-945c-46162d1aceb2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.664899 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc4e051-af7c-469f-945c-46162d1aceb2-logs\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.664918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.664941 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " 
pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.664967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.666427 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc4e051-af7c-469f-945c-46162d1aceb2-logs\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.666559 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.666665 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fc4e051-af7c-469f-945c-46162d1aceb2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.671363 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc 
kubenswrapper[4775]: I1125 20:25:54.673077 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.673374 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.675335 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4fc4e051-af7c-469f-945c-46162d1aceb2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.678418 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fc4e051-af7c-469f-945c-46162d1aceb2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.695272 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mzr5\" (UniqueName: \"kubernetes.io/projected/4fc4e051-af7c-469f-945c-46162d1aceb2-kube-api-access-9mzr5\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.705334 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fc4e051-af7c-469f-945c-46162d1aceb2\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.757282 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.860620 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b" path="/var/lib/kubelet/pods/3feaa9a3-7a4c-4bf1-a5ec-5abaa6c6493b/volumes" Nov 25 20:25:54 crc kubenswrapper[4775]: I1125 20:25:54.861520 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e260fa6a-5df2-4db3-8d74-11a9dbe5bd47" path="/var/lib/kubelet/pods/e260fa6a-5df2-4db3-8d74-11a9dbe5bd47/volumes" Nov 25 20:25:55 crc kubenswrapper[4775]: I1125 20:25:55.103449 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mx4vt" event={"ID":"6a582aec-b4ff-44a8-b217-9079392a5c8f","Type":"ContainerStarted","Data":"caa4e95a1b065dfce4e34ecb574dbd8d7854ceb216e284c9585385c858d6bf4d"} Nov 25 20:25:55 crc kubenswrapper[4775]: I1125 20:25:55.130787 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9ff9ed95-35ed-4232-b848-8d85332bcb8f","Type":"ContainerStarted","Data":"b3f14851a0c5e6877281f764d526fb44f0d8db918589dcf49dea5df64fffa0cf"} Nov 25 20:25:55 crc kubenswrapper[4775]: I1125 20:25:55.130827 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9ff9ed95-35ed-4232-b848-8d85332bcb8f","Type":"ContainerStarted","Data":"0caa5a29bcf05433352563b401f0642593293251674680a402e57b8b4b6047fd"} Nov 25 20:25:55 crc kubenswrapper[4775]: I1125 20:25:55.281579 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 25 20:25:55 crc kubenswrapper[4775]: I1125 20:25:55.846760 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:25:55 crc kubenswrapper[4775]: E1125 20:25:55.847148 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:25:57 crc kubenswrapper[4775]: I1125 20:25:57.923473 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 25 20:25:57 crc kubenswrapper[4775]: I1125 20:25:57.960423 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 25 20:26:00 crc kubenswrapper[4775]: I1125 20:26:00.176691 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fc4e051-af7c-469f-945c-46162d1aceb2","Type":"ContainerStarted","Data":"b56f9c20d1db99aa538c7a93f1811bd935f9613ffae4b520f30aa2d50cd9ed09"} Nov 25 20:26:02 crc kubenswrapper[4775]: I1125 20:26:02.245862 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9ff9ed95-35ed-4232-b848-8d85332bcb8f","Type":"ContainerStarted","Data":"7196b88f42dc7a75697315b80adc8afe789aa784197b25958bb37d6ca83d9050"} Nov 25 20:26:02 crc kubenswrapper[4775]: I1125 20:26:02.267030 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"4fc4e051-af7c-469f-945c-46162d1aceb2","Type":"ContainerStarted","Data":"50a9d0df9716ac345fccfd961ce7062ba81ec303b6d3c134665279de681c12d2"} Nov 25 20:26:02 crc kubenswrapper[4775]: I1125 20:26:02.296216 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.296199799 podStartE2EDuration="9.296199799s" podCreationTimestamp="2025-11-25 20:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:26:02.282253061 +0000 UTC m=+3144.198615427" watchObservedRunningTime="2025-11-25 20:26:02.296199799 +0000 UTC m=+3144.212562165" Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.278164 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7679659b64-d62zj" event={"ID":"e5501671-0373-42f3-b08b-2b0c4c6049fa","Type":"ContainerStarted","Data":"b9d708d07b2ec49229a301a9014ac54030d8fcb46dfc5a8aad31f38099fe035a"} Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.278495 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7679659b64-d62zj" event={"ID":"e5501671-0373-42f3-b08b-2b0c4c6049fa","Type":"ContainerStarted","Data":"a7430b9a332a13b841b717601654c786053188721ad1669e68dba41054cda09a"} Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.283219 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bf8598cd5-69z2f" event={"ID":"f540d713-b2ba-459b-84b8-714fe08f05ac","Type":"ContainerStarted","Data":"8f107b52e4d16da9c983af7c5620637bad2de0e4c67b79c76e6397460bd3fd38"} Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.283261 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bf8598cd5-69z2f" event={"ID":"f540d713-b2ba-459b-84b8-714fe08f05ac","Type":"ContainerStarted","Data":"7bc94d703a40fff0c91617154a41340f91d66d4ede7f8230ae321f81227a4137"} Nov 25 20:26:03 crc 
kubenswrapper[4775]: I1125 20:26:03.283285 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6bf8598cd5-69z2f" podUID="f540d713-b2ba-459b-84b8-714fe08f05ac" containerName="horizon-log" containerID="cri-o://7bc94d703a40fff0c91617154a41340f91d66d4ede7f8230ae321f81227a4137" gracePeriod=30 Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.283313 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6bf8598cd5-69z2f" podUID="f540d713-b2ba-459b-84b8-714fe08f05ac" containerName="horizon" containerID="cri-o://8f107b52e4d16da9c983af7c5620637bad2de0e4c67b79c76e6397460bd3fd38" gracePeriod=30 Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.285246 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fc4e051-af7c-469f-945c-46162d1aceb2","Type":"ContainerStarted","Data":"c7b42beaacef61861b602ffb42cb3e0bd3ec30471d8fc02441528cb230a3aa26"} Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.286605 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77ddd59696-rlw9m" event={"ID":"d6f1f978-0027-4119-8469-5acf67c75746","Type":"ContainerStarted","Data":"138471395d057c6142a3749c0efe9644b810099cfac777fe61f0d1c9f3d941fb"} Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.286975 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77ddd59696-rlw9m" event={"ID":"d6f1f978-0027-4119-8469-5acf67c75746","Type":"ContainerStarted","Data":"ca60aa0297afc72a451d4f269f74b7a446547bd8e3de1a68c2d75ac7faf44a57"} Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.289353 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8695b6d995-cnfpw" event={"ID":"8d1939db-0c4f-45b0-9b3b-3d91590a9730","Type":"ContainerStarted","Data":"eec297159506a66bf6b0a1319810d4ec126d7a36d2a1967cd062c2b7739dbf78"} Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.289387 
4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8695b6d995-cnfpw" event={"ID":"8d1939db-0c4f-45b0-9b3b-3d91590a9730","Type":"ContainerStarted","Data":"9dd2d3db45f3f05ebbf876a0b624ff4bee34b929ccef26936164793eabe46afc"} Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.289407 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8695b6d995-cnfpw" podUID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerName="horizon-log" containerID="cri-o://9dd2d3db45f3f05ebbf876a0b624ff4bee34b929ccef26936164793eabe46afc" gracePeriod=30 Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.289461 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8695b6d995-cnfpw" podUID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerName="horizon" containerID="cri-o://eec297159506a66bf6b0a1319810d4ec126d7a36d2a1967cd062c2b7739dbf78" gracePeriod=30 Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.310236 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7679659b64-d62zj" podStartSLOduration=3.072461881 podStartE2EDuration="13.310216119s" podCreationTimestamp="2025-11-25 20:25:50 +0000 UTC" firstStartedPulling="2025-11-25 20:25:51.690923941 +0000 UTC m=+3133.607286307" lastFinishedPulling="2025-11-25 20:26:01.928678179 +0000 UTC m=+3143.845040545" observedRunningTime="2025-11-25 20:26:03.30101613 +0000 UTC m=+3145.217378486" watchObservedRunningTime="2025-11-25 20:26:03.310216119 +0000 UTC m=+3145.226578485" Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.340662 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77ddd59696-rlw9m" podStartSLOduration=2.805916601 podStartE2EDuration="13.340630075s" podCreationTimestamp="2025-11-25 20:25:50 +0000 UTC" firstStartedPulling="2025-11-25 20:25:51.544827537 +0000 UTC m=+3133.461189903" lastFinishedPulling="2025-11-25 
20:26:02.079541021 +0000 UTC m=+3143.995903377" observedRunningTime="2025-11-25 20:26:03.322526093 +0000 UTC m=+3145.238888459" watchObservedRunningTime="2025-11-25 20:26:03.340630075 +0000 UTC m=+3145.256992441" Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.345664 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6bf8598cd5-69z2f" podStartSLOduration=2.628683923 podStartE2EDuration="15.345636131s" podCreationTimestamp="2025-11-25 20:25:48 +0000 UTC" firstStartedPulling="2025-11-25 20:25:49.294160198 +0000 UTC m=+3131.210522564" lastFinishedPulling="2025-11-25 20:26:02.011112396 +0000 UTC m=+3143.927474772" observedRunningTime="2025-11-25 20:26:03.336993407 +0000 UTC m=+3145.253355773" watchObservedRunningTime="2025-11-25 20:26:03.345636131 +0000 UTC m=+3145.261998497" Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.363830 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8695b6d995-cnfpw" podStartSLOduration=2.445956226 podStartE2EDuration="15.363813694s" podCreationTimestamp="2025-11-25 20:25:48 +0000 UTC" firstStartedPulling="2025-11-25 20:25:49.004109409 +0000 UTC m=+3130.920471775" lastFinishedPulling="2025-11-25 20:26:01.921966867 +0000 UTC m=+3143.838329243" observedRunningTime="2025-11-25 20:26:03.35482467 +0000 UTC m=+3145.271187056" watchObservedRunningTime="2025-11-25 20:26:03.363813694 +0000 UTC m=+3145.280176060" Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.372219 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.372204332 podStartE2EDuration="9.372204332s" podCreationTimestamp="2025-11-25 20:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:26:03.37142122 +0000 UTC m=+3145.287783586" watchObservedRunningTime="2025-11-25 20:26:03.372204332 
+0000 UTC m=+3145.288566688" Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.395416 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.395475 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.438749 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 20:26:03 crc kubenswrapper[4775]: I1125 20:26:03.440301 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 20:26:04 crc kubenswrapper[4775]: I1125 20:26:04.302231 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 20:26:04 crc kubenswrapper[4775]: I1125 20:26:04.302832 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 20:26:04 crc kubenswrapper[4775]: I1125 20:26:04.757674 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 20:26:04 crc kubenswrapper[4775]: I1125 20:26:04.757715 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 20:26:04 crc kubenswrapper[4775]: I1125 20:26:04.814307 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 20:26:04 crc kubenswrapper[4775]: I1125 20:26:04.825133 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 20:26:05 crc kubenswrapper[4775]: I1125 20:26:05.306391 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Nov 25 20:26:05 crc kubenswrapper[4775]: I1125 20:26:05.307009 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 20:26:06 crc kubenswrapper[4775]: I1125 20:26:06.315980 4775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:26:06 crc kubenswrapper[4775]: I1125 20:26:06.534413 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 20:26:06 crc kubenswrapper[4775]: I1125 20:26:06.539340 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 20:26:08 crc kubenswrapper[4775]: I1125 20:26:08.397739 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:26:08 crc kubenswrapper[4775]: I1125 20:26:08.477369 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:26:08 crc kubenswrapper[4775]: I1125 20:26:08.486183 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 20:26:08 crc kubenswrapper[4775]: I1125 20:26:08.854476 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:26:08 crc kubenswrapper[4775]: E1125 20:26:08.854930 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:26:09 crc kubenswrapper[4775]: I1125 20:26:09.347219 
4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mx4vt" event={"ID":"6a582aec-b4ff-44a8-b217-9079392a5c8f","Type":"ContainerStarted","Data":"06f7fd2563e337203ec45d89054266bda6450d4f9c700022be857b2d6a2eb709"} Nov 25 20:26:09 crc kubenswrapper[4775]: I1125 20:26:09.566014 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 20:26:09 crc kubenswrapper[4775]: I1125 20:26:09.601059 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-mx4vt" podStartSLOduration=2.771498884 podStartE2EDuration="16.601037348s" podCreationTimestamp="2025-11-25 20:25:53 +0000 UTC" firstStartedPulling="2025-11-25 20:25:54.472589146 +0000 UTC m=+3136.388951512" lastFinishedPulling="2025-11-25 20:26:08.30212761 +0000 UTC m=+3150.218489976" observedRunningTime="2025-11-25 20:26:09.371527241 +0000 UTC m=+3151.287889607" watchObservedRunningTime="2025-11-25 20:26:09.601037348 +0000 UTC m=+3151.517399714" Nov 25 20:26:11 crc kubenswrapper[4775]: I1125 20:26:11.014184 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:26:11 crc kubenswrapper[4775]: I1125 20:26:11.014465 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:26:11 crc kubenswrapper[4775]: I1125 20:26:11.071110 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:26:11 crc kubenswrapper[4775]: I1125 20:26:11.071168 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:26:20 crc kubenswrapper[4775]: I1125 20:26:20.450802 4775 generic.go:334] "Generic (PLEG): container finished" podID="6a582aec-b4ff-44a8-b217-9079392a5c8f" containerID="06f7fd2563e337203ec45d89054266bda6450d4f9c700022be857b2d6a2eb709" 
exitCode=0 Nov 25 20:26:20 crc kubenswrapper[4775]: I1125 20:26:20.450915 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mx4vt" event={"ID":"6a582aec-b4ff-44a8-b217-9079392a5c8f","Type":"ContainerDied","Data":"06f7fd2563e337203ec45d89054266bda6450d4f9c700022be857b2d6a2eb709"} Nov 25 20:26:21 crc kubenswrapper[4775]: I1125 20:26:21.017505 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7679659b64-d62zj" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.236:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.236:8443: connect: connection refused" Nov 25 20:26:21 crc kubenswrapper[4775]: I1125 20:26:21.074227 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77ddd59696-rlw9m" podUID="d6f1f978-0027-4119-8469-5acf67c75746" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.237:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.237:8443: connect: connection refused" Nov 25 20:26:21 crc kubenswrapper[4775]: I1125 20:26:21.937812 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mx4vt" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.102290 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-combined-ca-bundle\") pod \"6a582aec-b4ff-44a8-b217-9079392a5c8f\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.102427 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-job-config-data\") pod \"6a582aec-b4ff-44a8-b217-9079392a5c8f\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.102712 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-config-data\") pod \"6a582aec-b4ff-44a8-b217-9079392a5c8f\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.102803 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zs52\" (UniqueName: \"kubernetes.io/projected/6a582aec-b4ff-44a8-b217-9079392a5c8f-kube-api-access-7zs52\") pod \"6a582aec-b4ff-44a8-b217-9079392a5c8f\" (UID: \"6a582aec-b4ff-44a8-b217-9079392a5c8f\") " Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.109322 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a582aec-b4ff-44a8-b217-9079392a5c8f-kube-api-access-7zs52" (OuterVolumeSpecName: "kube-api-access-7zs52") pod "6a582aec-b4ff-44a8-b217-9079392a5c8f" (UID: "6a582aec-b4ff-44a8-b217-9079392a5c8f"). InnerVolumeSpecName "kube-api-access-7zs52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.110374 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "6a582aec-b4ff-44a8-b217-9079392a5c8f" (UID: "6a582aec-b4ff-44a8-b217-9079392a5c8f"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.129045 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-config-data" (OuterVolumeSpecName: "config-data") pod "6a582aec-b4ff-44a8-b217-9079392a5c8f" (UID: "6a582aec-b4ff-44a8-b217-9079392a5c8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.142685 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a582aec-b4ff-44a8-b217-9079392a5c8f" (UID: "6a582aec-b4ff-44a8-b217-9079392a5c8f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.205494 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.205534 4775 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.205544 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a582aec-b4ff-44a8-b217-9079392a5c8f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.205578 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zs52\" (UniqueName: \"kubernetes.io/projected/6a582aec-b4ff-44a8-b217-9079392a5c8f-kube-api-access-7zs52\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.482774 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mx4vt" event={"ID":"6a582aec-b4ff-44a8-b217-9079392a5c8f","Type":"ContainerDied","Data":"caa4e95a1b065dfce4e34ecb574dbd8d7854ceb216e284c9585385c858d6bf4d"} Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.483024 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mx4vt" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.483032 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caa4e95a1b065dfce4e34ecb574dbd8d7854ceb216e284c9585385c858d6bf4d" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.772767 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 20:26:22 crc kubenswrapper[4775]: E1125 20:26:22.773314 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a582aec-b4ff-44a8-b217-9079392a5c8f" containerName="manila-db-sync" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.773384 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a582aec-b4ff-44a8-b217-9079392a5c8f" containerName="manila-db-sync" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.773702 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a582aec-b4ff-44a8-b217-9079392a5c8f" containerName="manila-db-sync" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.774803 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.778789 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-rqgf4" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.779222 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.779537 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.779770 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.780986 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.784261 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.789155 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.792082 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.804354 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.848164 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:26:22 crc kubenswrapper[4775]: E1125 20:26:22.848625 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.918745 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzz72\" (UniqueName: \"kubernetes.io/projected/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-kube-api-access-wzz72\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.918799 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.918824 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.918856 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-scripts\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.918874 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-ceph\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.918890 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-config-data\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.918934 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.918953 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.918994 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-scripts\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.919012 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.919030 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.919052 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.919069 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.919093 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdzt\" (UniqueName: \"kubernetes.io/projected/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-kube-api-access-jvdzt\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.933180 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-gthzc"] Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.934587 4775 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:22 crc kubenswrapper[4775]: I1125 20:26:22.957569 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-gthzc"] Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.020216 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzz72\" (UniqueName: \"kubernetes.io/projected/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-kube-api-access-wzz72\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.020459 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.020584 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.020701 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-scripts\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.020782 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-ceph\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.020875 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-config-data\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.020978 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.021059 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.021170 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-scripts\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.021238 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " 
pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.021306 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.021391 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.021460 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.021550 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdzt\" (UniqueName: \"kubernetes.io/projected/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-kube-api-access-jvdzt\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.023616 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.023793 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.027136 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.029621 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.032962 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.035230 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.035892 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-scripts\") pod \"manila-share-share1-0\" (UID: 
\"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.037961 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-ceph\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.039877 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-config-data\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.044302 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.045259 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.045273 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-scripts\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.052110 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jvdzt\" (UniqueName: \"kubernetes.io/projected/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-kube-api-access-jvdzt\") pod \"manila-scheduler-0\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.052579 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzz72\" (UniqueName: \"kubernetes.io/projected/0a88473d-4ba5-4147-bf60-128f0b7ea8f6-kube-api-access-wzz72\") pod \"manila-share-share1-0\" (UID: \"0a88473d-4ba5-4147-bf60-128f0b7ea8f6\") " pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.100157 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.101693 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.103755 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.106999 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.111210 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.119878 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.125710 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.125757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-logs\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.125796 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drgsc\" (UniqueName: \"kubernetes.io/projected/49613d54-e600-4168-8782-66c3fef8b983-kube-api-access-drgsc\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.125844 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzl88\" (UniqueName: \"kubernetes.io/projected/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-kube-api-access-mzl88\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.125881 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.125901 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data-custom\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.125921 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.125947 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.125989 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.126019 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-config\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.126037 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-scripts\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.126096 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.126144 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-etc-machine-id\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229056 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229344 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-etc-machine-id\") pod \"manila-api-0\" (UID: 
\"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229381 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229429 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-logs\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229481 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drgsc\" (UniqueName: \"kubernetes.io/projected/49613d54-e600-4168-8782-66c3fef8b983-kube-api-access-drgsc\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229500 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzl88\" (UniqueName: \"kubernetes.io/projected/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-kube-api-access-mzl88\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229538 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc 
kubenswrapper[4775]: I1125 20:26:23.229560 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data-custom\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229579 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229613 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229641 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229682 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-config\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.229697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-scripts\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.232569 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.232661 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-etc-machine-id\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.233484 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.233944 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-logs\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.234300 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " 
pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.234911 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-config\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.236587 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49613d54-e600-4168-8782-66c3fef8b983-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.263785 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data-custom\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.264563 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-scripts\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.266813 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.268759 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.270175 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzl88\" (UniqueName: \"kubernetes.io/projected/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-kube-api-access-mzl88\") pod \"manila-api-0\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.279748 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drgsc\" (UniqueName: \"kubernetes.io/projected/49613d54-e600-4168-8782-66c3fef8b983-kube-api-access-drgsc\") pod \"dnsmasq-dns-76b5fdb995-gthzc\" (UID: \"49613d54-e600-4168-8782-66c3fef8b983\") " pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.344234 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.551915 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.552434 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.758843 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 20:26:23 crc kubenswrapper[4775]: W1125 20:26:23.924203 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2223bbbc_0aa4_41cb_9ba8_aab6ca05d6cd.slice/crio-fac3d7883a0f059eda2e47cb0135f5d5995c535e964873de4f181a6db149a346 WatchSource:0}: Error finding container fac3d7883a0f059eda2e47cb0135f5d5995c535e964873de4f181a6db149a346: Status 404 returned error can't find the container with id fac3d7883a0f059eda2e47cb0135f5d5995c535e964873de4f181a6db149a346 Nov 25 20:26:23 crc kubenswrapper[4775]: I1125 20:26:23.927018 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 20:26:24 crc kubenswrapper[4775]: I1125 20:26:24.062526 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-gthzc"] Nov 25 20:26:24 crc kubenswrapper[4775]: I1125 20:26:24.541808 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"61e0c56b2657221e7cd8ed63b1f78a1efec77c9366be141df7495fe2cb4591c4"} Nov 25 20:26:24 crc kubenswrapper[4775]: I1125 20:26:24.553727 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cbd9df5e-6aad-4113-8e66-c831af3b7c5f","Type":"ContainerStarted","Data":"d70b6579bf220fd3fabfd4f8e729d2c49be9cb71cef80037556eb116ff48f12f"} Nov 25 20:26:24 crc kubenswrapper[4775]: I1125 20:26:24.565671 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd","Type":"ContainerStarted","Data":"a4fa684d32d84f96f58976117b596c2070959f12e5ac0d044585e1d81490a3c9"} Nov 25 
20:26:24 crc kubenswrapper[4775]: I1125 20:26:24.565734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd","Type":"ContainerStarted","Data":"fac3d7883a0f059eda2e47cb0135f5d5995c535e964873de4f181a6db149a346"} Nov 25 20:26:24 crc kubenswrapper[4775]: I1125 20:26:24.571844 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" event={"ID":"49613d54-e600-4168-8782-66c3fef8b983","Type":"ContainerStarted","Data":"3f20b2c8e9fa93b80e098a62c89fbeaee8ec1a55499ffeeecc83d1a939cbc228"} Nov 25 20:26:24 crc kubenswrapper[4775]: I1125 20:26:24.571883 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" event={"ID":"49613d54-e600-4168-8782-66c3fef8b983","Type":"ContainerStarted","Data":"6e455354ec8274dc893fd4906a3b394f317743021d2106ab300cac7aee3e0d24"} Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.586817 4775 generic.go:334] "Generic (PLEG): container finished" podID="49613d54-e600-4168-8782-66c3fef8b983" containerID="3f20b2c8e9fa93b80e098a62c89fbeaee8ec1a55499ffeeecc83d1a939cbc228" exitCode=0 Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.587016 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" event={"ID":"49613d54-e600-4168-8782-66c3fef8b983","Type":"ContainerDied","Data":"3f20b2c8e9fa93b80e098a62c89fbeaee8ec1a55499ffeeecc83d1a939cbc228"} Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.587349 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" event={"ID":"49613d54-e600-4168-8782-66c3fef8b983","Type":"ContainerStarted","Data":"0422260404c110276042ad5fb690db75429ee950ec6d936e949720e7ad0ca7bb"} Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.587388 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:25 
crc kubenswrapper[4775]: I1125 20:26:25.590715 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cbd9df5e-6aad-4113-8e66-c831af3b7c5f","Type":"ContainerStarted","Data":"f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55"} Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.590759 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cbd9df5e-6aad-4113-8e66-c831af3b7c5f","Type":"ContainerStarted","Data":"14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a"} Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.596289 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd","Type":"ContainerStarted","Data":"ace1dc6f4800ecc4c4904e81f3674c36845056eed9db0fbe80f74029fd1d8945"} Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.597544 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.612776 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" podStartSLOduration=3.612756823 podStartE2EDuration="3.612756823s" podCreationTimestamp="2025-11-25 20:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:26:25.604722045 +0000 UTC m=+3167.521084411" watchObservedRunningTime="2025-11-25 20:26:25.612756823 +0000 UTC m=+3167.529119189" Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.634130 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=2.634102122 podStartE2EDuration="2.634102122s" podCreationTimestamp="2025-11-25 20:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:26:25.62181568 +0000 UTC m=+3167.538178046" watchObservedRunningTime="2025-11-25 20:26:25.634102122 +0000 UTC m=+3167.550464498" Nov 25 20:26:25 crc kubenswrapper[4775]: I1125 20:26:25.658738 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.920571765 podStartE2EDuration="3.65871753s" podCreationTimestamp="2025-11-25 20:26:22 +0000 UTC" firstStartedPulling="2025-11-25 20:26:23.590295895 +0000 UTC m=+3165.506658261" lastFinishedPulling="2025-11-25 20:26:24.32844164 +0000 UTC m=+3166.244804026" observedRunningTime="2025-11-25 20:26:25.643948939 +0000 UTC m=+3167.560311305" watchObservedRunningTime="2025-11-25 20:26:25.65871753 +0000 UTC m=+3167.575079906" Nov 25 20:26:26 crc kubenswrapper[4775]: I1125 20:26:26.033618 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 25 20:26:27 crc kubenswrapper[4775]: I1125 20:26:27.513534 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:27 crc kubenswrapper[4775]: I1125 20:26:27.513908 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="proxy-httpd" containerID="cri-o://f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009" gracePeriod=30 Nov 25 20:26:27 crc kubenswrapper[4775]: I1125 20:26:27.514256 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="ceilometer-central-agent" containerID="cri-o://2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1" gracePeriod=30 Nov 25 20:26:27 crc kubenswrapper[4775]: I1125 20:26:27.513969 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="sg-core" containerID="cri-o://b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2" gracePeriod=30 Nov 25 20:26:27 crc kubenswrapper[4775]: I1125 20:26:27.514012 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="ceilometer-notification-agent" containerID="cri-o://9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73" gracePeriod=30 Nov 25 20:26:27 crc kubenswrapper[4775]: I1125 20:26:27.613547 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerName="manila-api" containerID="cri-o://ace1dc6f4800ecc4c4904e81f3674c36845056eed9db0fbe80f74029fd1d8945" gracePeriod=30 Nov 25 20:26:27 crc kubenswrapper[4775]: I1125 20:26:27.613773 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerName="manila-api-log" containerID="cri-o://a4fa684d32d84f96f58976117b596c2070959f12e5ac0d044585e1d81490a3c9" gracePeriod=30 Nov 25 20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.682658 4775 generic.go:334] "Generic (PLEG): container finished" podID="679cadba-ff1b-4691-94c6-d218f83173f0" containerID="f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009" exitCode=0 Nov 25 20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.682693 4775 generic.go:334] "Generic (PLEG): container finished" podID="679cadba-ff1b-4691-94c6-d218f83173f0" containerID="b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2" exitCode=2 Nov 25 20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.682702 4775 generic.go:334] "Generic (PLEG): container finished" podID="679cadba-ff1b-4691-94c6-d218f83173f0" containerID="2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1" exitCode=0 Nov 25 
20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.682685 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerDied","Data":"f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009"} Nov 25 20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.682816 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerDied","Data":"b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2"} Nov 25 20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.682832 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerDied","Data":"2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1"} Nov 25 20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.692409 4775 generic.go:334] "Generic (PLEG): container finished" podID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerID="ace1dc6f4800ecc4c4904e81f3674c36845056eed9db0fbe80f74029fd1d8945" exitCode=0 Nov 25 20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.692434 4775 generic.go:334] "Generic (PLEG): container finished" podID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerID="a4fa684d32d84f96f58976117b596c2070959f12e5ac0d044585e1d81490a3c9" exitCode=143 Nov 25 20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.692450 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd","Type":"ContainerDied","Data":"ace1dc6f4800ecc4c4904e81f3674c36845056eed9db0fbe80f74029fd1d8945"} Nov 25 20:26:28 crc kubenswrapper[4775]: I1125 20:26:28.692469 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd","Type":"ContainerDied","Data":"a4fa684d32d84f96f58976117b596c2070959f12e5ac0d044585e1d81490a3c9"} Nov 25 
20:26:30 crc kubenswrapper[4775]: I1125 20:26:30.844611 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.009244 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data\") pod \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.009281 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data-custom\") pod \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.009337 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzl88\" (UniqueName: \"kubernetes.io/projected/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-kube-api-access-mzl88\") pod \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.009440 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-etc-machine-id\") pod \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.009552 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-combined-ca-bundle\") pod \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 
20:26:31.009580 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-logs\") pod \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.009662 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-scripts\") pod \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\" (UID: \"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd\") " Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.009761 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" (UID: "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.010337 4775 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.012046 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-logs" (OuterVolumeSpecName: "logs") pod "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" (UID: "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.014102 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" (UID: "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.016661 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-scripts" (OuterVolumeSpecName: "scripts") pod "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" (UID: "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.018594 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-kube-api-access-mzl88" (OuterVolumeSpecName: "kube-api-access-mzl88") pod "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" (UID: "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd"). InnerVolumeSpecName "kube-api-access-mzl88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.046777 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" (UID: "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.072183 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data" (OuterVolumeSpecName: "config-data") pod "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" (UID: "2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.111784 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzl88\" (UniqueName: \"kubernetes.io/projected/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-kube-api-access-mzl88\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.111812 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.111822 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.111830 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.111842 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.111850 4775 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.733812 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.733813 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd","Type":"ContainerDied","Data":"fac3d7883a0f059eda2e47cb0135f5d5995c535e964873de4f181a6db149a346"} Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.734202 4775 scope.go:117] "RemoveContainer" containerID="ace1dc6f4800ecc4c4904e81f3674c36845056eed9db0fbe80f74029fd1d8945" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.737143 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"2d7ef516a811343cd10e12e15c402f06d9bd61c6c1f5ea05e97aa83edcd18925"} Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.737195 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"57aba6aaa2804283fb168870adab04de52f9965a36c106b2cd2e4dc780861f37"} Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.767248 4775 scope.go:117] "RemoveContainer" containerID="a4fa684d32d84f96f58976117b596c2070959f12e5ac0d044585e1d81490a3c9" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.770568 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.966414739 podStartE2EDuration="9.770546373s" podCreationTimestamp="2025-11-25 20:26:22 +0000 UTC" firstStartedPulling="2025-11-25 20:26:23.767296797 +0000 UTC m=+3165.683659163" lastFinishedPulling="2025-11-25 20:26:30.571428431 +0000 UTC 
m=+3172.487790797" observedRunningTime="2025-11-25 20:26:31.766839102 +0000 UTC m=+3173.683201468" watchObservedRunningTime="2025-11-25 20:26:31.770546373 +0000 UTC m=+3173.686908779" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.843243 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.860443 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.873846 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 25 20:26:31 crc kubenswrapper[4775]: E1125 20:26:31.874205 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerName="manila-api" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.874217 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerName="manila-api" Nov 25 20:26:31 crc kubenswrapper[4775]: E1125 20:26:31.874230 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerName="manila-api-log" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.874236 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerName="manila-api-log" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.874416 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerName="manila-api" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.874430 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" containerName="manila-api-log" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.875416 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.877344 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.877559 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.877732 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.895117 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.948350 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhjsn\" (UniqueName: \"kubernetes.io/projected/a18f9ccb-ee60-48c8-9fe2-5a505036b958-kube-api-access-qhjsn\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.948401 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-config-data-custom\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.948445 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.948461 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-internal-tls-certs\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.948508 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-config-data\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.948560 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-scripts\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.948587 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a18f9ccb-ee60-48c8-9fe2-5a505036b958-logs\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.948607 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a18f9ccb-ee60-48c8-9fe2-5a505036b958-etc-machine-id\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:31 crc kubenswrapper[4775]: I1125 20:26:31.948627 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-public-tls-certs\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.050252 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-config-data-custom\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.050355 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.050371 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-internal-tls-certs\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.051210 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-config-data\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.051261 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-scripts\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: 
I1125 20:26:32.051292 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a18f9ccb-ee60-48c8-9fe2-5a505036b958-logs\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.051313 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a18f9ccb-ee60-48c8-9fe2-5a505036b958-etc-machine-id\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.051336 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-public-tls-certs\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.051419 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhjsn\" (UniqueName: \"kubernetes.io/projected/a18f9ccb-ee60-48c8-9fe2-5a505036b958-kube-api-access-qhjsn\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.051880 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a18f9ccb-ee60-48c8-9fe2-5a505036b958-logs\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.052002 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a18f9ccb-ee60-48c8-9fe2-5a505036b958-etc-machine-id\") pod \"manila-api-0\" (UID: 
\"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.073864 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-config-data-custom\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.074223 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-internal-tls-certs\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.074433 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.074744 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-scripts\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.079543 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhjsn\" (UniqueName: \"kubernetes.io/projected/a18f9ccb-ee60-48c8-9fe2-5a505036b958-kube-api-access-qhjsn\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.080052 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-public-tls-certs\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.081365 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18f9ccb-ee60-48c8-9fe2-5a505036b958-config-data\") pod \"manila-api-0\" (UID: \"a18f9ccb-ee60-48c8-9fe2-5a505036b958\") " pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.196193 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 20:26:32 crc kubenswrapper[4775]: I1125 20:26:32.872381 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd" path="/var/lib/kubelet/pods/2223bbbc-0aa4-41cb-9ba8-aab6ca05d6cd/volumes" Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.104997 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.112796 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.553843 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-gthzc" Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.607273 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.643106 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-66dcr"] Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.643503 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" 
podUID="4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" containerName="dnsmasq-dns" containerID="cri-o://816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52" gracePeriod=10 Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.758085 4775 generic.go:334] "Generic (PLEG): container finished" podID="f540d713-b2ba-459b-84b8-714fe08f05ac" containerID="8f107b52e4d16da9c983af7c5620637bad2de0e4c67b79c76e6397460bd3fd38" exitCode=137 Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.758499 4775 generic.go:334] "Generic (PLEG): container finished" podID="f540d713-b2ba-459b-84b8-714fe08f05ac" containerID="7bc94d703a40fff0c91617154a41340f91d66d4ede7f8230ae321f81227a4137" exitCode=137 Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.758274 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bf8598cd5-69z2f" event={"ID":"f540d713-b2ba-459b-84b8-714fe08f05ac","Type":"ContainerDied","Data":"8f107b52e4d16da9c983af7c5620637bad2de0e4c67b79c76e6397460bd3fd38"} Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.758571 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bf8598cd5-69z2f" event={"ID":"f540d713-b2ba-459b-84b8-714fe08f05ac","Type":"ContainerDied","Data":"7bc94d703a40fff0c91617154a41340f91d66d4ede7f8230ae321f81227a4137"} Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.760191 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"7fe426a6827ad65bea74e8dc08e0a29274c826f705258213a7698498f3317506"} Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.764749 4775 generic.go:334] "Generic (PLEG): container finished" podID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerID="eec297159506a66bf6b0a1319810d4ec126d7a36d2a1967cd062c2b7739dbf78" exitCode=137 Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.764769 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerID="9dd2d3db45f3f05ebbf876a0b624ff4bee34b929ccef26936164793eabe46afc" exitCode=137 Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.764804 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8695b6d995-cnfpw" event={"ID":"8d1939db-0c4f-45b0-9b3b-3d91590a9730","Type":"ContainerDied","Data":"eec297159506a66bf6b0a1319810d4ec126d7a36d2a1967cd062c2b7739dbf78"} Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.764827 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8695b6d995-cnfpw" event={"ID":"8d1939db-0c4f-45b0-9b3b-3d91590a9730","Type":"ContainerDied","Data":"9dd2d3db45f3f05ebbf876a0b624ff4bee34b929ccef26936164793eabe46afc"} Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.767831 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="57aba6aaa2804283fb168870adab04de52f9965a36c106b2cd2e4dc780861f37" exitCode=1 Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.767852 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"57aba6aaa2804283fb168870adab04de52f9965a36c106b2cd2e4dc780861f37"} Nov 25 20:26:33 crc kubenswrapper[4775]: I1125 20:26:33.768447 4775 scope.go:117] "RemoveContainer" containerID="57aba6aaa2804283fb168870adab04de52f9965a36c106b2cd2e4dc780861f37" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.361562 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.434595 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.526827 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-scripts\") pod \"f540d713-b2ba-459b-84b8-714fe08f05ac\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.527487 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f540d713-b2ba-459b-84b8-714fe08f05ac-logs\") pod \"f540d713-b2ba-459b-84b8-714fe08f05ac\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.527545 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpnbg\" (UniqueName: \"kubernetes.io/projected/f540d713-b2ba-459b-84b8-714fe08f05ac-kube-api-access-mpnbg\") pod \"f540d713-b2ba-459b-84b8-714fe08f05ac\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.527633 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f540d713-b2ba-459b-84b8-714fe08f05ac-horizon-secret-key\") pod \"f540d713-b2ba-459b-84b8-714fe08f05ac\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.527694 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-config-data\") pod \"f540d713-b2ba-459b-84b8-714fe08f05ac\" (UID: \"f540d713-b2ba-459b-84b8-714fe08f05ac\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.527962 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f540d713-b2ba-459b-84b8-714fe08f05ac-logs" (OuterVolumeSpecName: "logs") pod "f540d713-b2ba-459b-84b8-714fe08f05ac" (UID: "f540d713-b2ba-459b-84b8-714fe08f05ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.528572 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f540d713-b2ba-459b-84b8-714fe08f05ac-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.534334 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f540d713-b2ba-459b-84b8-714fe08f05ac-kube-api-access-mpnbg" (OuterVolumeSpecName: "kube-api-access-mpnbg") pod "f540d713-b2ba-459b-84b8-714fe08f05ac" (UID: "f540d713-b2ba-459b-84b8-714fe08f05ac"). InnerVolumeSpecName "kube-api-access-mpnbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.534396 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f540d713-b2ba-459b-84b8-714fe08f05ac-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f540d713-b2ba-459b-84b8-714fe08f05ac" (UID: "f540d713-b2ba-459b-84b8-714fe08f05ac"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.555239 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-scripts" (OuterVolumeSpecName: "scripts") pod "f540d713-b2ba-459b-84b8-714fe08f05ac" (UID: "f540d713-b2ba-459b-84b8-714fe08f05ac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.574586 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-config-data" (OuterVolumeSpecName: "config-data") pod "f540d713-b2ba-459b-84b8-714fe08f05ac" (UID: "f540d713-b2ba-459b-84b8-714fe08f05ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.624052 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.629458 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2rcz\" (UniqueName: \"kubernetes.io/projected/8d1939db-0c4f-45b0-9b3b-3d91590a9730-kube-api-access-n2rcz\") pod \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.630016 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-scripts\") pod \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.630081 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d1939db-0c4f-45b0-9b3b-3d91590a9730-horizon-secret-key\") pod \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.630126 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1939db-0c4f-45b0-9b3b-3d91590a9730-logs\") pod 
\"8d1939db-0c4f-45b0-9b3b-3d91590a9730\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.630171 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-config-data\") pod \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\" (UID: \"8d1939db-0c4f-45b0-9b3b-3d91590a9730\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.630851 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1939db-0c4f-45b0-9b3b-3d91590a9730-logs" (OuterVolumeSpecName: "logs") pod "8d1939db-0c4f-45b0-9b3b-3d91590a9730" (UID: "8d1939db-0c4f-45b0-9b3b-3d91590a9730"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.630897 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpnbg\" (UniqueName: \"kubernetes.io/projected/f540d713-b2ba-459b-84b8-714fe08f05ac-kube-api-access-mpnbg\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.630911 4775 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f540d713-b2ba-459b-84b8-714fe08f05ac-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.630924 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.630936 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f540d713-b2ba-459b-84b8-714fe08f05ac-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.636748 4775 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1939db-0c4f-45b0-9b3b-3d91590a9730-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8d1939db-0c4f-45b0-9b3b-3d91590a9730" (UID: "8d1939db-0c4f-45b0-9b3b-3d91590a9730"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.636956 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1939db-0c4f-45b0-9b3b-3d91590a9730-kube-api-access-n2rcz" (OuterVolumeSpecName: "kube-api-access-n2rcz") pod "8d1939db-0c4f-45b0-9b3b-3d91590a9730" (UID: "8d1939db-0c4f-45b0-9b3b-3d91590a9730"). InnerVolumeSpecName "kube-api-access-n2rcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.707869 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-config-data" (OuterVolumeSpecName: "config-data") pod "8d1939db-0c4f-45b0-9b3b-3d91590a9730" (UID: "8d1939db-0c4f-45b0-9b3b-3d91590a9730"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.709904 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-scripts" (OuterVolumeSpecName: "scripts") pod "8d1939db-0c4f-45b0-9b3b-3d91590a9730" (UID: "8d1939db-0c4f-45b0-9b3b-3d91590a9730"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.733973 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb9fc\" (UniqueName: \"kubernetes.io/projected/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-kube-api-access-gb9fc\") pod \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734079 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-sb\") pod \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734137 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-config\") pod \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734188 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-dns-svc\") pod \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734226 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-nb\") pod \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734261 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-openstack-edpm-ipam\") pod \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734673 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2rcz\" (UniqueName: \"kubernetes.io/projected/8d1939db-0c4f-45b0-9b3b-3d91590a9730-kube-api-access-n2rcz\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734684 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734693 4775 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d1939db-0c4f-45b0-9b3b-3d91590a9730-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734702 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1939db-0c4f-45b0-9b3b-3d91590a9730-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.734710 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1939db-0c4f-45b0-9b3b-3d91590a9730-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.747059 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-kube-api-access-gb9fc" (OuterVolumeSpecName: "kube-api-access-gb9fc") pod "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" (UID: "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43"). InnerVolumeSpecName "kube-api-access-gb9fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.784262 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-config" (OuterVolumeSpecName: "config") pod "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" (UID: "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.792530 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"b38e7a846a236a51e051e65eadb4e44381fc3db8480b47a14205dc315eb0ea91"} Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.792583 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"9359971c89917f9ff92ae42d7a467954da535ce8d37dcf8361093b9ec069af8f"} Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.793898 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.797436 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8695b6d995-cnfpw" event={"ID":"8d1939db-0c4f-45b0-9b3b-3d91590a9730","Type":"ContainerDied","Data":"b4281707f3815fb4659fda8a78479c3f9e53f32359f99d280670c24f7705ed4c"} Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.797476 4775 scope.go:117] "RemoveContainer" containerID="eec297159506a66bf6b0a1319810d4ec126d7a36d2a1967cd062c2b7739dbf78" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.798565 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8695b6d995-cnfpw" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.801684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"ff3981b4698b883066d292592aa0e5ca6fc32f54ebb8828ebb9a2397abc3d12f"} Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.803257 4775 generic.go:334] "Generic (PLEG): container finished" podID="4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" containerID="816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52" exitCode=0 Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.803313 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" event={"ID":"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43","Type":"ContainerDied","Data":"816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52"} Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.803335 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" event={"ID":"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43","Type":"ContainerDied","Data":"afae7a127032a10081ca801b6dacb4c0ccf6dae5b059259eeacb5c167bb196ab"} Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.803394 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-66dcr" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.803419 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" (UID: "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.805169 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bf8598cd5-69z2f" event={"ID":"f540d713-b2ba-459b-84b8-714fe08f05ac","Type":"ContainerDied","Data":"74d5b3ab567bb083d26d25071ebcee1fa16bb0596475254df5c01d623cdc935c"} Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.805392 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bf8598cd5-69z2f" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.823701 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.823674784 podStartE2EDuration="3.823674784s" podCreationTimestamp="2025-11-25 20:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:26:34.810495446 +0000 UTC m=+3176.726857812" watchObservedRunningTime="2025-11-25 20:26:34.823674784 +0000 UTC m=+3176.740037150" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.839061 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" (UID: "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.855703 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-openstack-edpm-ipam\") pod \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\" (UID: \"4739fd50-c4a9-4bbb-ab6f-eb67564b2f43\") " Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.856099 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:26:34 crc kubenswrapper[4775]: W1125 20:26:34.856559 4775 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43/volumes/kubernetes.io~configmap/openstack-edpm-ipam Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.856640 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" (UID: "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: E1125 20:26:34.857553 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.863194 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.863222 4775 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.863237 4775 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.863248 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb9fc\" (UniqueName: \"kubernetes.io/projected/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-kube-api-access-gb9fc\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.865345 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" (UID: "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.868383 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" (UID: "4739fd50-c4a9-4bbb-ab6f-eb67564b2f43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.916449 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6bf8598cd5-69z2f"] Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.923505 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6bf8598cd5-69z2f"] Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.931411 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8695b6d995-cnfpw"] Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.937728 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8695b6d995-cnfpw"] Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.970888 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.971316 4775 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:34 crc kubenswrapper[4775]: I1125 20:26:34.986526 4775 scope.go:117] "RemoveContainer" containerID="9dd2d3db45f3f05ebbf876a0b624ff4bee34b929ccef26936164793eabe46afc" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.005201 4775 scope.go:117] "RemoveContainer" 
containerID="816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.021203 4775 scope.go:117] "RemoveContainer" containerID="eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.068578 4775 scope.go:117] "RemoveContainer" containerID="816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52" Nov 25 20:26:35 crc kubenswrapper[4775]: E1125 20:26:35.069159 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52\": container with ID starting with 816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52 not found: ID does not exist" containerID="816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.069205 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52"} err="failed to get container status \"816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52\": rpc error: code = NotFound desc = could not find container \"816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52\": container with ID starting with 816d1c8739f163dae90f7420e7d7ad349c6b6b56c05545965ebe210bd0f7cf52 not found: ID does not exist" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.069234 4775 scope.go:117] "RemoveContainer" containerID="eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838" Nov 25 20:26:35 crc kubenswrapper[4775]: E1125 20:26:35.069612 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838\": container with ID starting with 
eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838 not found: ID does not exist" containerID="eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.069673 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838"} err="failed to get container status \"eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838\": rpc error: code = NotFound desc = could not find container \"eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838\": container with ID starting with eca5f509e9efd8d43c5bcb7e7fb48284cbb105956c9b0179bced62ff143e5838 not found: ID does not exist" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.069700 4775 scope.go:117] "RemoveContainer" containerID="8f107b52e4d16da9c983af7c5620637bad2de0e4c67b79c76e6397460bd3fd38" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.175245 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.185753 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-66dcr"] Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.197155 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-66dcr"] Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.201330 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.314707 4775 scope.go:117] "RemoveContainer" containerID="7bc94d703a40fff0c91617154a41340f91d66d4ede7f8230ae321f81227a4137" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.769515 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.784374 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-scripts\") pod \"679cadba-ff1b-4691-94c6-d218f83173f0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.784423 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-combined-ca-bundle\") pod \"679cadba-ff1b-4691-94c6-d218f83173f0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.784447 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-sg-core-conf-yaml\") pod \"679cadba-ff1b-4691-94c6-d218f83173f0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.784524 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh8f8\" (UniqueName: \"kubernetes.io/projected/679cadba-ff1b-4691-94c6-d218f83173f0-kube-api-access-jh8f8\") pod \"679cadba-ff1b-4691-94c6-d218f83173f0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.784610 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-ceilometer-tls-certs\") pod \"679cadba-ff1b-4691-94c6-d218f83173f0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.784661 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-log-httpd\") pod \"679cadba-ff1b-4691-94c6-d218f83173f0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.784754 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-config-data\") pod \"679cadba-ff1b-4691-94c6-d218f83173f0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.784898 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-run-httpd\") pod \"679cadba-ff1b-4691-94c6-d218f83173f0\" (UID: \"679cadba-ff1b-4691-94c6-d218f83173f0\") " Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.785978 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "679cadba-ff1b-4691-94c6-d218f83173f0" (UID: "679cadba-ff1b-4691-94c6-d218f83173f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.791101 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "679cadba-ff1b-4691-94c6-d218f83173f0" (UID: "679cadba-ff1b-4691-94c6-d218f83173f0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.795887 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679cadba-ff1b-4691-94c6-d218f83173f0-kube-api-access-jh8f8" (OuterVolumeSpecName: "kube-api-access-jh8f8") pod "679cadba-ff1b-4691-94c6-d218f83173f0" (UID: "679cadba-ff1b-4691-94c6-d218f83173f0"). InnerVolumeSpecName "kube-api-access-jh8f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.795884 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-scripts" (OuterVolumeSpecName: "scripts") pod "679cadba-ff1b-4691-94c6-d218f83173f0" (UID: "679cadba-ff1b-4691-94c6-d218f83173f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.837852 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "679cadba-ff1b-4691-94c6-d218f83173f0" (UID: "679cadba-ff1b-4691-94c6-d218f83173f0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.875671 4775 generic.go:334] "Generic (PLEG): container finished" podID="679cadba-ff1b-4691-94c6-d218f83173f0" containerID="9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73" exitCode=0 Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.875779 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerDied","Data":"9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73"} Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.878854 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.879316 4775 scope.go:117] "RemoveContainer" containerID="f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.875813 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679cadba-ff1b-4691-94c6-d218f83173f0","Type":"ContainerDied","Data":"9f994a22961c3d5a9e9ca3e355e677a3807a8b8e115dc45adacc83cff510437d"} Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.885858 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "679cadba-ff1b-4691-94c6-d218f83173f0" (UID: "679cadba-ff1b-4691-94c6-d218f83173f0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.888449 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh8f8\" (UniqueName: \"kubernetes.io/projected/679cadba-ff1b-4691-94c6-d218f83173f0-kube-api-access-jh8f8\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.888478 4775 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.888489 4775 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.888514 4775 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679cadba-ff1b-4691-94c6-d218f83173f0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.888523 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.888534 4775 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.963825 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "679cadba-ff1b-4691-94c6-d218f83173f0" (UID: 
"679cadba-ff1b-4691-94c6-d218f83173f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.971808 4775 scope.go:117] "RemoveContainer" containerID="b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.992195 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-config-data" (OuterVolumeSpecName: "config-data") pod "679cadba-ff1b-4691-94c6-d218f83173f0" (UID: "679cadba-ff1b-4691-94c6-d218f83173f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.993489 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:35 crc kubenswrapper[4775]: I1125 20:26:35.993513 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679cadba-ff1b-4691-94c6-d218f83173f0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.040828 4775 scope.go:117] "RemoveContainer" containerID="9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.115271 4775 scope.go:117] "RemoveContainer" containerID="2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.137862 4775 scope.go:117] "RemoveContainer" containerID="f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.139903 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009\": container with ID starting with f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009 not found: ID does not exist" containerID="f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.139943 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009"} err="failed to get container status \"f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009\": rpc error: code = NotFound desc = could not find container \"f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009\": container with ID starting with f65e957378baba34ddf170a3f9de6327209cb32b15558a883ee2c867ae603009 not found: ID does not exist" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.139967 4775 scope.go:117] "RemoveContainer" containerID="b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.145775 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2\": container with ID starting with b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2 not found: ID does not exist" containerID="b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.145816 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2"} err="failed to get container status \"b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2\": rpc error: code = NotFound desc = could not find container \"b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2\": container with ID 
starting with b16e3e1a46a7b7e0140b968c536426a7e663a8a863acc04bb0cf33267a777fe2 not found: ID does not exist" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.145839 4775 scope.go:117] "RemoveContainer" containerID="9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.149528 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73\": container with ID starting with 9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73 not found: ID does not exist" containerID="9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.149559 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73"} err="failed to get container status \"9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73\": rpc error: code = NotFound desc = could not find container \"9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73\": container with ID starting with 9e057e1ec13131d11ce237b684c14408c2dcb8728f5a5414c9f89c5e04eaba73 not found: ID does not exist" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.149578 4775 scope.go:117] "RemoveContainer" containerID="2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.150086 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1\": container with ID starting with 2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1 not found: ID does not exist" containerID="2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1" Nov 25 
20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.150109 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1"} err="failed to get container status \"2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1\": rpc error: code = NotFound desc = could not find container \"2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1\": container with ID starting with 2f4cb55c1ff5c2146f1c3fc6ccbaa9ee8b5ada727a0953d06c838c1f19287ac1 not found: ID does not exist" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.207808 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.219087 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.240716 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241065 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" containerName="dnsmasq-dns" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241086 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" containerName="dnsmasq-dns" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241103 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="proxy-httpd" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241112 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="proxy-httpd" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241123 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f540d713-b2ba-459b-84b8-714fe08f05ac" 
containerName="horizon-log" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241133 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f540d713-b2ba-459b-84b8-714fe08f05ac" containerName="horizon-log" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241149 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerName="horizon" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241157 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerName="horizon" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241170 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerName="horizon-log" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241177 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerName="horizon-log" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241194 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f540d713-b2ba-459b-84b8-714fe08f05ac" containerName="horizon" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241202 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f540d713-b2ba-459b-84b8-714fe08f05ac" containerName="horizon" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241221 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="ceilometer-central-agent" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241229 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="ceilometer-central-agent" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241246 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="sg-core" Nov 25 20:26:36 crc 
kubenswrapper[4775]: I1125 20:26:36.241253 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="sg-core" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241267 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" containerName="init" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241273 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" containerName="init" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.241283 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="ceilometer-notification-agent" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241290 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="ceilometer-notification-agent" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241475 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f540d713-b2ba-459b-84b8-714fe08f05ac" containerName="horizon" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241489 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f540d713-b2ba-459b-84b8-714fe08f05ac" containerName="horizon-log" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241499 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerName="horizon-log" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241508 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="ceilometer-central-agent" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241518 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="proxy-httpd" Nov 25 20:26:36 crc 
kubenswrapper[4775]: I1125 20:26:36.241530 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="sg-core" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241543 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" containerName="dnsmasq-dns" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241554 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" containerName="horizon" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.241564 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" containerName="ceilometer-notification-agent" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.243268 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.245910 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.246216 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.246393 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.280669 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.399079 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 
crc kubenswrapper[4775]: I1125 20:26:36.399139 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-scripts\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.399170 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-run-httpd\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.399312 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-config-data\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.399389 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxlx\" (UniqueName: \"kubernetes.io/projected/fca2dd17-71ad-456d-a753-0e59d5d37d81-kube-api-access-xvxlx\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.399539 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-log-httpd\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.399757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.399908 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.501727 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-log-httpd\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.501812 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.501850 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.501901 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.501932 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-scripts\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.501959 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-run-httpd\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.501984 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-config-data\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.502193 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvxlx\" (UniqueName: \"kubernetes.io/projected/fca2dd17-71ad-456d-a753-0e59d5d37d81-kube-api-access-xvxlx\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.502494 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-log-httpd\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.502755 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-run-httpd\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.505805 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-scripts\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.505957 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.510310 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.511263 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-config-data\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.521065 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.524358 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvxlx\" (UniqueName: \"kubernetes.io/projected/fca2dd17-71ad-456d-a753-0e59d5d37d81-kube-api-access-xvxlx\") pod \"ceilometer-0\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.564850 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.856788 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4739fd50-c4a9-4bbb-ab6f-eb67564b2f43" path="/var/lib/kubelet/pods/4739fd50-c4a9-4bbb-ab6f-eb67564b2f43/volumes" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.857753 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679cadba-ff1b-4691-94c6-d218f83173f0" path="/var/lib/kubelet/pods/679cadba-ff1b-4691-94c6-d218f83173f0/volumes" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.858965 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d1939db-0c4f-45b0-9b3b-3d91590a9730" path="/var/lib/kubelet/pods/8d1939db-0c4f-45b0-9b3b-3d91590a9730/volumes" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.859586 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f540d713-b2ba-459b-84b8-714fe08f05ac" path="/var/lib/kubelet/pods/f540d713-b2ba-459b-84b8-714fe08f05ac/volumes" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.949585 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="ff3981b4698b883066d292592aa0e5ca6fc32f54ebb8828ebb9a2397abc3d12f" exitCode=1 Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.949688 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"ff3981b4698b883066d292592aa0e5ca6fc32f54ebb8828ebb9a2397abc3d12f"} Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.949784 4775 scope.go:117] "RemoveContainer" containerID="57aba6aaa2804283fb168870adab04de52f9965a36c106b2cd2e4dc780861f37" Nov 25 20:26:36 crc kubenswrapper[4775]: I1125 20:26:36.950750 4775 scope.go:117] "RemoveContainer" containerID="ff3981b4698b883066d292592aa0e5ca6fc32f54ebb8828ebb9a2397abc3d12f" Nov 25 20:26:36 crc kubenswrapper[4775]: E1125 20:26:36.951165 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:26:37 crc kubenswrapper[4775]: I1125 20:26:37.023259 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:37 crc kubenswrapper[4775]: W1125 20:26:37.026960 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfca2dd17_71ad_456d_a753_0e59d5d37d81.slice/crio-ade540bcf99802676e955b559dd7a80f2fed0f441259d357a67fe1dde2718950 WatchSource:0}: Error finding container ade540bcf99802676e955b559dd7a80f2fed0f441259d357a67fe1dde2718950: Status 404 returned error can't find the container with id ade540bcf99802676e955b559dd7a80f2fed0f441259d357a67fe1dde2718950 Nov 25 20:26:37 crc kubenswrapper[4775]: I1125 20:26:37.144119 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:37 crc kubenswrapper[4775]: I1125 20:26:37.181126 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:26:37 crc kubenswrapper[4775]: I1125 20:26:37.321216 
4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-77ddd59696-rlw9m" Nov 25 20:26:37 crc kubenswrapper[4775]: I1125 20:26:37.390444 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7679659b64-d62zj"] Nov 25 20:26:37 crc kubenswrapper[4775]: I1125 20:26:37.963913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerStarted","Data":"c216d1f49a7dc1c25fb9b3cbc6eacd20805f588e4d2247a708a4e6955d2bf58a"} Nov 25 20:26:37 crc kubenswrapper[4775]: I1125 20:26:37.964372 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerStarted","Data":"ade540bcf99802676e955b559dd7a80f2fed0f441259d357a67fe1dde2718950"} Nov 25 20:26:37 crc kubenswrapper[4775]: I1125 20:26:37.964253 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7679659b64-d62zj" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon" containerID="cri-o://b9d708d07b2ec49229a301a9014ac54030d8fcb46dfc5a8aad31f38099fe035a" gracePeriod=30 Nov 25 20:26:37 crc kubenswrapper[4775]: I1125 20:26:37.964085 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7679659b64-d62zj" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon-log" containerID="cri-o://a7430b9a332a13b841b717601654c786053188721ad1669e68dba41054cda09a" gracePeriod=30 Nov 25 20:26:38 crc kubenswrapper[4775]: I1125 20:26:38.974294 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerStarted","Data":"10d348e4f28ed6b7fc06248c24deed879e1761e29ac5a4df15897b94cab370e1"} Nov 25 20:26:38 crc kubenswrapper[4775]: I1125 20:26:38.974827 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerStarted","Data":"67b5ff70a84262fc3799e9f96f5dd95f3f46b5973facd9de041d9fb029c5c573"} Nov 25 20:26:41 crc kubenswrapper[4775]: I1125 20:26:41.014479 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerStarted","Data":"5c7f1aac41d6f0fc84e5f378fe24a153a2e02c970b343eb23c5455a7b4059d70"} Nov 25 20:26:41 crc kubenswrapper[4775]: I1125 20:26:41.014813 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="ceilometer-central-agent" containerID="cri-o://c216d1f49a7dc1c25fb9b3cbc6eacd20805f588e4d2247a708a4e6955d2bf58a" gracePeriod=30 Nov 25 20:26:41 crc kubenswrapper[4775]: I1125 20:26:41.015120 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 20:26:41 crc kubenswrapper[4775]: I1125 20:26:41.015159 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="proxy-httpd" containerID="cri-o://5c7f1aac41d6f0fc84e5f378fe24a153a2e02c970b343eb23c5455a7b4059d70" gracePeriod=30 Nov 25 20:26:41 crc kubenswrapper[4775]: I1125 20:26:41.015255 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="sg-core" containerID="cri-o://10d348e4f28ed6b7fc06248c24deed879e1761e29ac5a4df15897b94cab370e1" gracePeriod=30 Nov 25 20:26:41 crc kubenswrapper[4775]: I1125 20:26:41.015302 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="ceilometer-notification-agent" 
containerID="cri-o://67b5ff70a84262fc3799e9f96f5dd95f3f46b5973facd9de041d9fb029c5c573" gracePeriod=30 Nov 25 20:26:41 crc kubenswrapper[4775]: I1125 20:26:41.053612 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.003747616 podStartE2EDuration="5.05357913s" podCreationTimestamp="2025-11-25 20:26:36 +0000 UTC" firstStartedPulling="2025-11-25 20:26:37.029467377 +0000 UTC m=+3178.945829743" lastFinishedPulling="2025-11-25 20:26:40.079298891 +0000 UTC m=+3181.995661257" observedRunningTime="2025-11-25 20:26:41.030452092 +0000 UTC m=+3182.946814468" watchObservedRunningTime="2025-11-25 20:26:41.05357913 +0000 UTC m=+3182.969941496" Nov 25 20:26:41 crc kubenswrapper[4775]: I1125 20:26:41.104240 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7679659b64-d62zj" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.236:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:57372->10.217.0.236:8443: read: connection reset by peer" Nov 25 20:26:42 crc kubenswrapper[4775]: I1125 20:26:42.028906 4775 generic.go:334] "Generic (PLEG): container finished" podID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerID="5c7f1aac41d6f0fc84e5f378fe24a153a2e02c970b343eb23c5455a7b4059d70" exitCode=0 Nov 25 20:26:42 crc kubenswrapper[4775]: I1125 20:26:42.029393 4775 generic.go:334] "Generic (PLEG): container finished" podID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerID="10d348e4f28ed6b7fc06248c24deed879e1761e29ac5a4df15897b94cab370e1" exitCode=2 Nov 25 20:26:42 crc kubenswrapper[4775]: I1125 20:26:42.029422 4775 generic.go:334] "Generic (PLEG): container finished" podID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerID="67b5ff70a84262fc3799e9f96f5dd95f3f46b5973facd9de041d9fb029c5c573" exitCode=0 Nov 25 20:26:42 crc kubenswrapper[4775]: I1125 20:26:42.029058 4775 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerDied","Data":"5c7f1aac41d6f0fc84e5f378fe24a153a2e02c970b343eb23c5455a7b4059d70"} Nov 25 20:26:42 crc kubenswrapper[4775]: I1125 20:26:42.029553 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerDied","Data":"10d348e4f28ed6b7fc06248c24deed879e1761e29ac5a4df15897b94cab370e1"} Nov 25 20:26:42 crc kubenswrapper[4775]: I1125 20:26:42.029585 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerDied","Data":"67b5ff70a84262fc3799e9f96f5dd95f3f46b5973facd9de041d9fb029c5c573"} Nov 25 20:26:42 crc kubenswrapper[4775]: I1125 20:26:42.032507 4775 generic.go:334] "Generic (PLEG): container finished" podID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerID="b9d708d07b2ec49229a301a9014ac54030d8fcb46dfc5a8aad31f38099fe035a" exitCode=0 Nov 25 20:26:42 crc kubenswrapper[4775]: I1125 20:26:42.032554 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7679659b64-d62zj" event={"ID":"e5501671-0373-42f3-b08b-2b0c4c6049fa","Type":"ContainerDied","Data":"b9d708d07b2ec49229a301a9014ac54030d8fcb46dfc5a8aad31f38099fe035a"} Nov 25 20:26:43 crc kubenswrapper[4775]: I1125 20:26:43.105314 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:26:43 crc kubenswrapper[4775]: I1125 20:26:43.105757 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:26:43 crc kubenswrapper[4775]: I1125 20:26:43.105780 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:26:43 crc kubenswrapper[4775]: I1125 20:26:43.108153 4775 scope.go:117] "RemoveContainer" 
containerID="ff3981b4698b883066d292592aa0e5ca6fc32f54ebb8828ebb9a2397abc3d12f" Nov 25 20:26:43 crc kubenswrapper[4775]: E1125 20:26:43.108814 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.065723 4775 generic.go:334] "Generic (PLEG): container finished" podID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerID="c216d1f49a7dc1c25fb9b3cbc6eacd20805f588e4d2247a708a4e6955d2bf58a" exitCode=0 Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.065776 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerDied","Data":"c216d1f49a7dc1c25fb9b3cbc6eacd20805f588e4d2247a708a4e6955d2bf58a"} Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.065840 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fca2dd17-71ad-456d-a753-0e59d5d37d81","Type":"ContainerDied","Data":"ade540bcf99802676e955b559dd7a80f2fed0f441259d357a67fe1dde2718950"} Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.065852 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ade540bcf99802676e955b559dd7a80f2fed0f441259d357a67fe1dde2718950" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.076673 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.273826 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-run-httpd\") pod \"fca2dd17-71ad-456d-a753-0e59d5d37d81\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.275380 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-scripts\") pod \"fca2dd17-71ad-456d-a753-0e59d5d37d81\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.276602 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-config-data\") pod \"fca2dd17-71ad-456d-a753-0e59d5d37d81\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.276815 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvxlx\" (UniqueName: \"kubernetes.io/projected/fca2dd17-71ad-456d-a753-0e59d5d37d81-kube-api-access-xvxlx\") pod \"fca2dd17-71ad-456d-a753-0e59d5d37d81\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.275270 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fca2dd17-71ad-456d-a753-0e59d5d37d81" (UID: "fca2dd17-71ad-456d-a753-0e59d5d37d81"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.277183 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-ceilometer-tls-certs\") pod \"fca2dd17-71ad-456d-a753-0e59d5d37d81\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.277544 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-combined-ca-bundle\") pod \"fca2dd17-71ad-456d-a753-0e59d5d37d81\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.277717 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-log-httpd\") pod \"fca2dd17-71ad-456d-a753-0e59d5d37d81\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.277881 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-sg-core-conf-yaml\") pod \"fca2dd17-71ad-456d-a753-0e59d5d37d81\" (UID: \"fca2dd17-71ad-456d-a753-0e59d5d37d81\") " Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.278498 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fca2dd17-71ad-456d-a753-0e59d5d37d81" (UID: "fca2dd17-71ad-456d-a753-0e59d5d37d81"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.278903 4775 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.279012 4775 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fca2dd17-71ad-456d-a753-0e59d5d37d81-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.283053 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca2dd17-71ad-456d-a753-0e59d5d37d81-kube-api-access-xvxlx" (OuterVolumeSpecName: "kube-api-access-xvxlx") pod "fca2dd17-71ad-456d-a753-0e59d5d37d81" (UID: "fca2dd17-71ad-456d-a753-0e59d5d37d81"). InnerVolumeSpecName "kube-api-access-xvxlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.287560 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-scripts" (OuterVolumeSpecName: "scripts") pod "fca2dd17-71ad-456d-a753-0e59d5d37d81" (UID: "fca2dd17-71ad-456d-a753-0e59d5d37d81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.311817 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fca2dd17-71ad-456d-a753-0e59d5d37d81" (UID: "fca2dd17-71ad-456d-a753-0e59d5d37d81"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.361165 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fca2dd17-71ad-456d-a753-0e59d5d37d81" (UID: "fca2dd17-71ad-456d-a753-0e59d5d37d81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.362958 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fca2dd17-71ad-456d-a753-0e59d5d37d81" (UID: "fca2dd17-71ad-456d-a753-0e59d5d37d81"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.378087 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-config-data" (OuterVolumeSpecName: "config-data") pod "fca2dd17-71ad-456d-a753-0e59d5d37d81" (UID: "fca2dd17-71ad-456d-a753-0e59d5d37d81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.380522 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.380660 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.380749 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvxlx\" (UniqueName: \"kubernetes.io/projected/fca2dd17-71ad-456d-a753-0e59d5d37d81-kube-api-access-xvxlx\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.380840 4775 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.380927 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.381013 4775 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fca2dd17-71ad-456d-a753-0e59d5d37d81-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.683634 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 25 20:26:44 crc kubenswrapper[4775]: I1125 20:26:44.766258 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 
20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.075834 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.076579 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" containerName="manila-scheduler" containerID="cri-o://14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a" gracePeriod=30 Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.076734 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" containerName="probe" containerID="cri-o://f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55" gracePeriod=30 Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.120483 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.133586 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.144851 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:45 crc kubenswrapper[4775]: E1125 20:26:45.146114 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="sg-core" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.146151 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="sg-core" Nov 25 20:26:45 crc kubenswrapper[4775]: E1125 20:26:45.146182 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="ceilometer-central-agent" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.146199 4775 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="ceilometer-central-agent" Nov 25 20:26:45 crc kubenswrapper[4775]: E1125 20:26:45.146290 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="ceilometer-notification-agent" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.146307 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="ceilometer-notification-agent" Nov 25 20:26:45 crc kubenswrapper[4775]: E1125 20:26:45.146328 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="proxy-httpd" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.146340 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="proxy-httpd" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.146682 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="proxy-httpd" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.146720 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="ceilometer-central-agent" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.146743 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="ceilometer-notification-agent" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.146808 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" containerName="sg-core" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.149024 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.153033 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.153289 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.153622 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.154757 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.301059 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-run-httpd\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.301273 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5p96\" (UniqueName: \"kubernetes.io/projected/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-kube-api-access-z5p96\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.301402 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-config-data\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.301567 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-log-httpd\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.301622 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.301953 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.302062 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-scripts\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.302161 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.405074 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.405176 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-scripts\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.405246 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.405286 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-run-httpd\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.405355 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5p96\" (UniqueName: \"kubernetes.io/projected/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-kube-api-access-z5p96\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.405421 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-config-data\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.405505 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-log-httpd\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.405544 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.406244 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-log-httpd\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.406468 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-run-httpd\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.412370 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.412554 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 
20:26:45.413413 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-scripts\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.414059 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-config-data\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.416294 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.427944 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5p96\" (UniqueName: \"kubernetes.io/projected/aba372ee-dd64-4cc1-a19d-d7f5e0bd0713-kube-api-access-z5p96\") pod \"ceilometer-0\" (UID: \"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713\") " pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.481140 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:26:45 crc kubenswrapper[4775]: I1125 20:26:45.961747 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.052308 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.085550 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713","Type":"ContainerStarted","Data":"2e15e163c4167091baee0b94cf762c188839a250fd7082ccd53d1e7f3196f725"} Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.098082 4775 generic.go:334] "Generic (PLEG): container finished" podID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" containerID="f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55" exitCode=0 Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.098300 4775 generic.go:334] "Generic (PLEG): container finished" podID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" containerID="14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a" exitCode=0 Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.098177 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cbd9df5e-6aad-4113-8e66-c831af3b7c5f","Type":"ContainerDied","Data":"f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55"} Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.098474 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cbd9df5e-6aad-4113-8e66-c831af3b7c5f","Type":"ContainerDied","Data":"14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a"} Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.098544 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cbd9df5e-6aad-4113-8e66-c831af3b7c5f","Type":"ContainerDied","Data":"d70b6579bf220fd3fabfd4f8e729d2c49be9cb71cef80037556eb116ff48f12f"} Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.098639 4775 scope.go:117] "RemoveContainer" containerID="f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55" Nov 25 20:26:46 crc kubenswrapper[4775]: 
I1125 20:26:46.098156 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.122941 4775 scope.go:117] "RemoveContainer" containerID="14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.143369 4775 scope.go:117] "RemoveContainer" containerID="f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55" Nov 25 20:26:46 crc kubenswrapper[4775]: E1125 20:26:46.144050 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55\": container with ID starting with f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55 not found: ID does not exist" containerID="f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.144084 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55"} err="failed to get container status \"f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55\": rpc error: code = NotFound desc = could not find container \"f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55\": container with ID starting with f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55 not found: ID does not exist" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.144104 4775 scope.go:117] "RemoveContainer" containerID="14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a" Nov 25 20:26:46 crc kubenswrapper[4775]: E1125 20:26:46.144335 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a\": 
container with ID starting with 14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a not found: ID does not exist" containerID="14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.144409 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a"} err="failed to get container status \"14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a\": rpc error: code = NotFound desc = could not find container \"14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a\": container with ID starting with 14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a not found: ID does not exist" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.144541 4775 scope.go:117] "RemoveContainer" containerID="f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.144867 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55"} err="failed to get container status \"f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55\": rpc error: code = NotFound desc = could not find container \"f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55\": container with ID starting with f47f3ad9c1ecde19c14b750434fe36aa8d090e8d3e438e6e6b5927130a8a7f55 not found: ID does not exist" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.144910 4775 scope.go:117] "RemoveContainer" containerID="14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.145230 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a"} err="failed to get 
container status \"14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a\": rpc error: code = NotFound desc = could not find container \"14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a\": container with ID starting with 14ddb9a9cb89feaf25086104368c914605cb726139dbb975a1bc78959fbf1b7a not found: ID does not exist" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.230965 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvdzt\" (UniqueName: \"kubernetes.io/projected/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-kube-api-access-jvdzt\") pod \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.231103 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data\") pod \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.231146 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data-custom\") pod \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.231177 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-etc-machine-id\") pod \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.231253 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-combined-ca-bundle\") pod \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.231299 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-scripts\") pod \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\" (UID: \"cbd9df5e-6aad-4113-8e66-c831af3b7c5f\") " Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.231708 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cbd9df5e-6aad-4113-8e66-c831af3b7c5f" (UID: "cbd9df5e-6aad-4113-8e66-c831af3b7c5f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.236040 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cbd9df5e-6aad-4113-8e66-c831af3b7c5f" (UID: "cbd9df5e-6aad-4113-8e66-c831af3b7c5f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.244739 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-kube-api-access-jvdzt" (OuterVolumeSpecName: "kube-api-access-jvdzt") pod "cbd9df5e-6aad-4113-8e66-c831af3b7c5f" (UID: "cbd9df5e-6aad-4113-8e66-c831af3b7c5f"). InnerVolumeSpecName "kube-api-access-jvdzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.248904 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-scripts" (OuterVolumeSpecName: "scripts") pod "cbd9df5e-6aad-4113-8e66-c831af3b7c5f" (UID: "cbd9df5e-6aad-4113-8e66-c831af3b7c5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.290418 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbd9df5e-6aad-4113-8e66-c831af3b7c5f" (UID: "cbd9df5e-6aad-4113-8e66-c831af3b7c5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.333601 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.333634 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.333689 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvdzt\" (UniqueName: \"kubernetes.io/projected/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-kube-api-access-jvdzt\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.333704 4775 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data-custom\") on node \"crc\" DevicePath \"\"" 
Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.333716 4775 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.338781 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data" (OuterVolumeSpecName: "config-data") pod "cbd9df5e-6aad-4113-8e66-c831af3b7c5f" (UID: "cbd9df5e-6aad-4113-8e66-c831af3b7c5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.434519 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.435158 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd9df5e-6aad-4113-8e66-c831af3b7c5f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.450272 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.472906 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 20:26:46 crc kubenswrapper[4775]: E1125 20:26:46.473494 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" containerName="manila-scheduler" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.473517 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" containerName="manila-scheduler" Nov 25 20:26:46 crc kubenswrapper[4775]: E1125 20:26:46.473565 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" 
containerName="probe" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.473577 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" containerName="probe" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.473980 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" containerName="probe" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.474005 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" containerName="manila-scheduler" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.475639 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.479281 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.481798 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.639458 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.639528 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.639557 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.639576 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-config-data\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.640154 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-scripts\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.640369 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn2ww\" (UniqueName: \"kubernetes.io/projected/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-kube-api-access-bn2ww\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.742951 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-scripts\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.743074 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn2ww\" (UniqueName: 
\"kubernetes.io/projected/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-kube-api-access-bn2ww\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.743146 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.743212 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.743256 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.743296 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-config-data\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.745388 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: 
\"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.747596 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-scripts\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.747948 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.749612 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-config-data\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.751292 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.766212 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn2ww\" (UniqueName: \"kubernetes.io/projected/5a2bec54-2f45-4aee-a3bf-774f63c4b64e-kube-api-access-bn2ww\") pod \"manila-scheduler-0\" (UID: \"5a2bec54-2f45-4aee-a3bf-774f63c4b64e\") " pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.804490 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.866300 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd9df5e-6aad-4113-8e66-c831af3b7c5f" path="/var/lib/kubelet/pods/cbd9df5e-6aad-4113-8e66-c831af3b7c5f/volumes" Nov 25 20:26:46 crc kubenswrapper[4775]: I1125 20:26:46.867136 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca2dd17-71ad-456d-a753-0e59d5d37d81" path="/var/lib/kubelet/pods/fca2dd17-71ad-456d-a753-0e59d5d37d81/volumes" Nov 25 20:26:47 crc kubenswrapper[4775]: I1125 20:26:47.113751 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713","Type":"ContainerStarted","Data":"eafee40cfeeac29f113763b2c123fda08469ea3e64366c039f5d19a7d7881d59"} Nov 25 20:26:47 crc kubenswrapper[4775]: I1125 20:26:47.276972 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 20:26:48 crc kubenswrapper[4775]: I1125 20:26:48.125641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5a2bec54-2f45-4aee-a3bf-774f63c4b64e","Type":"ContainerStarted","Data":"092aabb48db0f59b4af077566f1f515f54fe025a2aec757e3cd234f170bd72b4"} Nov 25 20:26:48 crc kubenswrapper[4775]: I1125 20:26:48.125960 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5a2bec54-2f45-4aee-a3bf-774f63c4b64e","Type":"ContainerStarted","Data":"c2b9db9380e8ce0ae6c597f5897f51df6ff1b9dcd8d5025628b0b95b804f6f97"} Nov 25 20:26:48 crc kubenswrapper[4775]: I1125 20:26:48.128479 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713","Type":"ContainerStarted","Data":"68a8727a165958762c2a9c6ba978d855bdd1a82001cbe21aad9838ee24441df7"} Nov 25 20:26:48 crc kubenswrapper[4775]: I1125 20:26:48.128513 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713","Type":"ContainerStarted","Data":"8a9f0dfa92d89d9c002c2f1eeee2cec81f744e816e46c688fa3e587e1e66519f"} Nov 25 20:26:48 crc kubenswrapper[4775]: I1125 20:26:48.859969 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:26:48 crc kubenswrapper[4775]: E1125 20:26:48.860458 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:26:49 crc kubenswrapper[4775]: I1125 20:26:49.139415 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5a2bec54-2f45-4aee-a3bf-774f63c4b64e","Type":"ContainerStarted","Data":"3090c380f456e4fb539892a9636d3104ce98b935b8e79fed1305aaf036abb4ea"} Nov 25 20:26:49 crc kubenswrapper[4775]: I1125 20:26:49.162645 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.1626234970000002 podStartE2EDuration="3.162623497s" podCreationTimestamp="2025-11-25 20:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:26:49.16055528 +0000 UTC m=+3191.076917656" watchObservedRunningTime="2025-11-25 20:26:49.162623497 +0000 UTC m=+3191.078985873" Nov 25 20:26:50 crc kubenswrapper[4775]: I1125 20:26:50.156370 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"aba372ee-dd64-4cc1-a19d-d7f5e0bd0713","Type":"ContainerStarted","Data":"ce4ff96b4dd0c29985abcc556c9e0f1830847e5577d87a1b5cc10ec62633687e"} Nov 25 20:26:50 crc kubenswrapper[4775]: I1125 20:26:50.156923 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 20:26:50 crc kubenswrapper[4775]: I1125 20:26:50.196205 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7595596009999999 podStartE2EDuration="5.196179225s" podCreationTimestamp="2025-11-25 20:26:45 +0000 UTC" firstStartedPulling="2025-11-25 20:26:45.980184023 +0000 UTC m=+3187.896546389" lastFinishedPulling="2025-11-25 20:26:49.416803627 +0000 UTC m=+3191.333166013" observedRunningTime="2025-11-25 20:26:50.179256266 +0000 UTC m=+3192.095618672" watchObservedRunningTime="2025-11-25 20:26:50.196179225 +0000 UTC m=+3192.112541601" Nov 25 20:26:51 crc kubenswrapper[4775]: I1125 20:26:51.015115 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7679659b64-d62zj" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.236:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.236:8443: connect: connection refused" Nov 25 20:26:53 crc kubenswrapper[4775]: I1125 20:26:53.193261 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:26:53 crc kubenswrapper[4775]: I1125 20:26:53.231426 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:26:56 crc kubenswrapper[4775]: I1125 20:26:56.805185 4775 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 25 20:26:56 crc kubenswrapper[4775]: I1125 20:26:56.848929 4775 scope.go:117] "RemoveContainer" containerID="ff3981b4698b883066d292592aa0e5ca6fc32f54ebb8828ebb9a2397abc3d12f" Nov 25 20:26:58 crc kubenswrapper[4775]: I1125 20:26:58.262120 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"619b53eefa05f40991c9db877e0cf17a5d7d9578cee13b16ac864f805bc4b936"} Nov 25 20:26:59 crc kubenswrapper[4775]: I1125 20:26:59.273616 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="619b53eefa05f40991c9db877e0cf17a5d7d9578cee13b16ac864f805bc4b936" exitCode=1 Nov 25 20:26:59 crc kubenswrapper[4775]: I1125 20:26:59.273723 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"619b53eefa05f40991c9db877e0cf17a5d7d9578cee13b16ac864f805bc4b936"} Nov 25 20:26:59 crc kubenswrapper[4775]: I1125 20:26:59.273981 4775 scope.go:117] "RemoveContainer" containerID="ff3981b4698b883066d292592aa0e5ca6fc32f54ebb8828ebb9a2397abc3d12f" Nov 25 20:26:59 crc kubenswrapper[4775]: I1125 20:26:59.274939 4775 scope.go:117] "RemoveContainer" containerID="619b53eefa05f40991c9db877e0cf17a5d7d9578cee13b16ac864f805bc4b936" Nov 25 20:26:59 crc kubenswrapper[4775]: E1125 20:26:59.275417 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:27:01 crc kubenswrapper[4775]: I1125 20:27:01.015595 4775 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/horizon-7679659b64-d62zj" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.236:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.236:8443: connect: connection refused" Nov 25 20:27:01 crc kubenswrapper[4775]: I1125 20:27:01.016443 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:27:01 crc kubenswrapper[4775]: I1125 20:27:01.847342 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:27:01 crc kubenswrapper[4775]: E1125 20:27:01.847857 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:27:03 crc kubenswrapper[4775]: I1125 20:27:03.104613 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:27:03 crc kubenswrapper[4775]: I1125 20:27:03.105177 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:27:03 crc kubenswrapper[4775]: I1125 20:27:03.105216 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:27:03 crc kubenswrapper[4775]: I1125 20:27:03.106411 4775 scope.go:117] "RemoveContainer" containerID="619b53eefa05f40991c9db877e0cf17a5d7d9578cee13b16ac864f805bc4b936" Nov 25 20:27:03 crc kubenswrapper[4775]: E1125 20:27:03.107050 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:27:03 crc kubenswrapper[4775]: I1125 20:27:03.173088 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:03 crc kubenswrapper[4775]: I1125 20:27:03.206383 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.305290 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.370756 4775 generic.go:334] "Generic (PLEG): container finished" podID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerID="a7430b9a332a13b841b717601654c786053188721ad1669e68dba41054cda09a" exitCode=137 Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.370796 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7679659b64-d62zj" event={"ID":"e5501671-0373-42f3-b08b-2b0c4c6049fa","Type":"ContainerDied","Data":"a7430b9a332a13b841b717601654c786053188721ad1669e68dba41054cda09a"} Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.867461 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.984927 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-tls-certs\") pod \"e5501671-0373-42f3-b08b-2b0c4c6049fa\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.985104 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82754\" (UniqueName: \"kubernetes.io/projected/e5501671-0373-42f3-b08b-2b0c4c6049fa-kube-api-access-82754\") pod \"e5501671-0373-42f3-b08b-2b0c4c6049fa\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.985180 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-scripts\") pod \"e5501671-0373-42f3-b08b-2b0c4c6049fa\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.985249 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-combined-ca-bundle\") pod \"e5501671-0373-42f3-b08b-2b0c4c6049fa\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.985270 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5501671-0373-42f3-b08b-2b0c4c6049fa-logs\") pod \"e5501671-0373-42f3-b08b-2b0c4c6049fa\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.985289 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-secret-key\") pod \"e5501671-0373-42f3-b08b-2b0c4c6049fa\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.985329 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-config-data\") pod \"e5501671-0373-42f3-b08b-2b0c4c6049fa\" (UID: \"e5501671-0373-42f3-b08b-2b0c4c6049fa\") " Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.986746 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5501671-0373-42f3-b08b-2b0c4c6049fa-logs" (OuterVolumeSpecName: "logs") pod "e5501671-0373-42f3-b08b-2b0c4c6049fa" (UID: "e5501671-0373-42f3-b08b-2b0c4c6049fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.990240 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5501671-0373-42f3-b08b-2b0c4c6049fa-kube-api-access-82754" (OuterVolumeSpecName: "kube-api-access-82754") pod "e5501671-0373-42f3-b08b-2b0c4c6049fa" (UID: "e5501671-0373-42f3-b08b-2b0c4c6049fa"). InnerVolumeSpecName "kube-api-access-82754". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:08 crc kubenswrapper[4775]: I1125 20:27:08.991855 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e5501671-0373-42f3-b08b-2b0c4c6049fa" (UID: "e5501671-0373-42f3-b08b-2b0c4c6049fa"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.008364 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-scripts" (OuterVolumeSpecName: "scripts") pod "e5501671-0373-42f3-b08b-2b0c4c6049fa" (UID: "e5501671-0373-42f3-b08b-2b0c4c6049fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.010809 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-config-data" (OuterVolumeSpecName: "config-data") pod "e5501671-0373-42f3-b08b-2b0c4c6049fa" (UID: "e5501671-0373-42f3-b08b-2b0c4c6049fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.016679 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5501671-0373-42f3-b08b-2b0c4c6049fa" (UID: "e5501671-0373-42f3-b08b-2b0c4c6049fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.042282 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e5501671-0373-42f3-b08b-2b0c4c6049fa" (UID: "e5501671-0373-42f3-b08b-2b0c4c6049fa"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.087676 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.087706 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5501671-0373-42f3-b08b-2b0c4c6049fa-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.087715 4775 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.087723 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.087731 4775 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5501671-0373-42f3-b08b-2b0c4c6049fa-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.087739 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82754\" (UniqueName: \"kubernetes.io/projected/e5501671-0373-42f3-b08b-2b0c4c6049fa-kube-api-access-82754\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.087748 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5501671-0373-42f3-b08b-2b0c4c6049fa-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.383761 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7679659b64-d62zj" event={"ID":"e5501671-0373-42f3-b08b-2b0c4c6049fa","Type":"ContainerDied","Data":"d1bd08058a0bfd28cc3a65e0b1d4b8a3638642a58d4f077cafae856d3425a56f"} Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.384173 4775 scope.go:117] "RemoveContainer" containerID="b9d708d07b2ec49229a301a9014ac54030d8fcb46dfc5a8aad31f38099fe035a" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.383850 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7679659b64-d62zj" Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.437954 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7679659b64-d62zj"] Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.459249 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7679659b64-d62zj"] Nov 25 20:27:09 crc kubenswrapper[4775]: I1125 20:27:09.600397 4775 scope.go:117] "RemoveContainer" containerID="a7430b9a332a13b841b717601654c786053188721ad1669e68dba41054cda09a" Nov 25 20:27:10 crc kubenswrapper[4775]: I1125 20:27:10.864056 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" path="/var/lib/kubelet/pods/e5501671-0373-42f3-b08b-2b0c4c6049fa/volumes" Nov 25 20:27:12 crc kubenswrapper[4775]: I1125 20:27:12.210674 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:12 crc kubenswrapper[4775]: I1125 20:27:12.210839 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:12 crc kubenswrapper[4775]: I1125 20:27:12.210970 
4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 20:27:12 crc kubenswrapper[4775]: I1125 20:27:12.212174 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"b38e7a846a236a51e051e65eadb4e44381fc3db8480b47a14205dc315eb0ea91"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 20:27:12 crc kubenswrapper[4775]: I1125 20:27:12.212248 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://b38e7a846a236a51e051e65eadb4e44381fc3db8480b47a14205dc315eb0ea91" gracePeriod=30 Nov 25 20:27:12 crc kubenswrapper[4775]: I1125 20:27:12.216721 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:15 crc kubenswrapper[4775]: I1125 20:27:15.484687 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="b38e7a846a236a51e051e65eadb4e44381fc3db8480b47a14205dc315eb0ea91" exitCode=0 Nov 25 20:27:15 crc kubenswrapper[4775]: I1125 20:27:15.484707 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"b38e7a846a236a51e051e65eadb4e44381fc3db8480b47a14205dc315eb0ea91"} Nov 25 20:27:15 crc kubenswrapper[4775]: I1125 20:27:15.494602 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 20:27:15 crc kubenswrapper[4775]: I1125 20:27:15.846984 4775 scope.go:117] "RemoveContainer" 
containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:27:15 crc kubenswrapper[4775]: E1125 20:27:15.847534 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:27:16 crc kubenswrapper[4775]: I1125 20:27:16.499105 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"088817aeb7cfcbe4e4d79e8223e322f2dd8509e1901ac1b6f753c2854b85194e"} Nov 25 20:27:16 crc kubenswrapper[4775]: I1125 20:27:16.500479 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:27:16 crc kubenswrapper[4775]: I1125 20:27:16.847342 4775 scope.go:117] "RemoveContainer" containerID="619b53eefa05f40991c9db877e0cf17a5d7d9578cee13b16ac864f805bc4b936" Nov 25 20:27:16 crc kubenswrapper[4775]: E1125 20:27:16.848073 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:27:26 crc kubenswrapper[4775]: I1125 20:27:26.847512 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:27:26 crc kubenswrapper[4775]: E1125 20:27:26.848271 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:27:28 crc kubenswrapper[4775]: I1125 20:27:28.854340 4775 scope.go:117] "RemoveContainer" containerID="619b53eefa05f40991c9db877e0cf17a5d7d9578cee13b16ac864f805bc4b936" Nov 25 20:27:29 crc kubenswrapper[4775]: I1125 20:27:29.648946 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35"} Nov 25 20:27:31 crc kubenswrapper[4775]: I1125 20:27:31.690876 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35" exitCode=1 Nov 25 20:27:31 crc kubenswrapper[4775]: I1125 20:27:31.690930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35"} Nov 25 20:27:31 crc kubenswrapper[4775]: I1125 20:27:31.691547 4775 scope.go:117] "RemoveContainer" containerID="619b53eefa05f40991c9db877e0cf17a5d7d9578cee13b16ac864f805bc4b936" Nov 25 20:27:31 crc kubenswrapper[4775]: I1125 20:27:31.692383 4775 scope.go:117] "RemoveContainer" containerID="12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35" Nov 25 20:27:31 crc kubenswrapper[4775]: E1125 20:27:31.692697 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manila-share 
pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:27:33 crc kubenswrapper[4775]: I1125 20:27:33.104530 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:27:33 crc kubenswrapper[4775]: I1125 20:27:33.105054 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:27:33 crc kubenswrapper[4775]: I1125 20:27:33.106161 4775 scope.go:117] "RemoveContainer" containerID="12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35" Nov 25 20:27:33 crc kubenswrapper[4775]: E1125 20:27:33.106847 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:27:33 crc kubenswrapper[4775]: I1125 20:27:33.192934 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:33 crc kubenswrapper[4775]: I1125 20:27:33.232479 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:41 crc kubenswrapper[4775]: I1125 20:27:41.846899 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf" Nov 25 20:27:42 crc kubenswrapper[4775]: I1125 20:27:42.826638 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"ec11d1f0a3f0d4a7c90c4c38fcc425cb1bce57f664d19903b36c7ccbbe002886"} Nov 25 20:27:43 crc kubenswrapper[4775]: I1125 20:27:43.104372 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:27:43 crc kubenswrapper[4775]: I1125 20:27:43.105423 4775 scope.go:117] "RemoveContainer" containerID="12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35" Nov 25 20:27:43 crc kubenswrapper[4775]: E1125 20:27:43.105678 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:27:43 crc kubenswrapper[4775]: I1125 20:27:43.183545 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:43 crc kubenswrapper[4775]: I1125 20:27:43.199444 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:52 crc kubenswrapper[4775]: I1125 20:27:52.207645 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:52 crc kubenswrapper[4775]: I1125 20:27:52.210801 4775 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:27:52 crc kubenswrapper[4775]: I1125 20:27:52.210915 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 20:27:52 crc kubenswrapper[4775]: I1125 20:27:52.212349 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"088817aeb7cfcbe4e4d79e8223e322f2dd8509e1901ac1b6f753c2854b85194e"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 20:27:52 crc kubenswrapper[4775]: I1125 20:27:52.212424 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://088817aeb7cfcbe4e4d79e8223e322f2dd8509e1901ac1b6f753c2854b85194e" gracePeriod=30 Nov 25 20:27:52 crc kubenswrapper[4775]: I1125 20:27:52.221442 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF" Nov 25 20:27:54 crc kubenswrapper[4775]: I1125 20:27:54.847892 4775 scope.go:117] "RemoveContainer" containerID="12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35" Nov 25 20:27:54 crc kubenswrapper[4775]: E1125 20:27:54.848629 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:27:55 crc kubenswrapper[4775]: I1125 
20:27:55.995264 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="088817aeb7cfcbe4e4d79e8223e322f2dd8509e1901ac1b6f753c2854b85194e" exitCode=0 Nov 25 20:27:55 crc kubenswrapper[4775]: I1125 20:27:55.995531 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"088817aeb7cfcbe4e4d79e8223e322f2dd8509e1901ac1b6f753c2854b85194e"} Nov 25 20:27:55 crc kubenswrapper[4775]: I1125 20:27:55.995608 4775 scope.go:117] "RemoveContainer" containerID="b38e7a846a236a51e051e65eadb4e44381fc3db8480b47a14205dc315eb0ea91" Nov 25 20:27:57 crc kubenswrapper[4775]: I1125 20:27:57.013920 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"84f752c473a775a41f4257c51ad391db81cc3f86a1d166e3c5e8dac508d7c890"} Nov 25 20:27:57 crc kubenswrapper[4775]: I1125 20:27:57.014380 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:28:07 crc kubenswrapper[4775]: I1125 20:28:07.848317 4775 scope.go:117] "RemoveContainer" containerID="12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35" Nov 25 20:28:07 crc kubenswrapper[4775]: E1125 20:28:07.849547 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:28:13 crc kubenswrapper[4775]: I1125 20:28:13.196313 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Nov 25 20:28:13 crc kubenswrapper[4775]: I1125 20:28:13.209577 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.562960 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zw6f6"] Nov 25 20:28:14 crc kubenswrapper[4775]: E1125 20:28:14.563854 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.563875 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon" Nov 25 20:28:14 crc kubenswrapper[4775]: E1125 20:28:14.563900 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon-log" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.563908 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon-log" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.564238 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.564279 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5501671-0373-42f3-b08b-2b0c4c6049fa" containerName="horizon-log" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.566350 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.580623 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zw6f6"] Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.635236 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-utilities\") pod \"redhat-marketplace-zw6f6\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.635302 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xds86\" (UniqueName: \"kubernetes.io/projected/57bb9a79-c85a-459a-8407-a4318c6520dd-kube-api-access-xds86\") pod \"redhat-marketplace-zw6f6\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.635380 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-catalog-content\") pod \"redhat-marketplace-zw6f6\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.737185 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xds86\" (UniqueName: \"kubernetes.io/projected/57bb9a79-c85a-459a-8407-a4318c6520dd-kube-api-access-xds86\") pod \"redhat-marketplace-zw6f6\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.737292 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-catalog-content\") pod \"redhat-marketplace-zw6f6\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.737397 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-utilities\") pod \"redhat-marketplace-zw6f6\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.737915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-utilities\") pod \"redhat-marketplace-zw6f6\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.738161 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-catalog-content\") pod \"redhat-marketplace-zw6f6\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.762170 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xds86\" (UniqueName: \"kubernetes.io/projected/57bb9a79-c85a-459a-8407-a4318c6520dd-kube-api-access-xds86\") pod \"redhat-marketplace-zw6f6\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:14 crc kubenswrapper[4775]: I1125 20:28:14.891431 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:15 crc kubenswrapper[4775]: I1125 20:28:15.357421 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zw6f6"] Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.254371 4775 generic.go:334] "Generic (PLEG): container finished" podID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerID="69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df" exitCode=0 Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.254463 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw6f6" event={"ID":"57bb9a79-c85a-459a-8407-a4318c6520dd","Type":"ContainerDied","Data":"69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df"} Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.254681 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw6f6" event={"ID":"57bb9a79-c85a-459a-8407-a4318c6520dd","Type":"ContainerStarted","Data":"885e2d18dc578a00d9b5a47cc576bba8e5d8155952d6b32cb96fddd98e587172"} Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.259543 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.783816 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mnnsw"] Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.787492 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.815705 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mnnsw"] Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.986183 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvvwr\" (UniqueName: \"kubernetes.io/projected/f51b9ade-ab6c-45cd-86db-617278d3b57d-kube-api-access-jvvwr\") pod \"redhat-operators-mnnsw\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") " pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.986239 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-utilities\") pod \"redhat-operators-mnnsw\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") " pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:16 crc kubenswrapper[4775]: I1125 20:28:16.986333 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-catalog-content\") pod \"redhat-operators-mnnsw\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") " pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.088222 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-catalog-content\") pod \"redhat-operators-mnnsw\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") " pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.088554 4775 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-jvvwr\" (UniqueName: \"kubernetes.io/projected/f51b9ade-ab6c-45cd-86db-617278d3b57d-kube-api-access-jvvwr\") pod \"redhat-operators-mnnsw\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") " pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.088594 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-utilities\") pod \"redhat-operators-mnnsw\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") " pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.088796 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-catalog-content\") pod \"redhat-operators-mnnsw\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") " pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.088959 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-utilities\") pod \"redhat-operators-mnnsw\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") " pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.113348 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvvwr\" (UniqueName: \"kubernetes.io/projected/f51b9ade-ab6c-45cd-86db-617278d3b57d-kube-api-access-jvvwr\") pod \"redhat-operators-mnnsw\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") " pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.128672 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.270877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw6f6" event={"ID":"57bb9a79-c85a-459a-8407-a4318c6520dd","Type":"ContainerStarted","Data":"ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e"} Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.584829 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mnnsw"] Nov 25 20:28:17 crc kubenswrapper[4775]: W1125 20:28:17.585341 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51b9ade_ab6c_45cd_86db_617278d3b57d.slice/crio-889478d715c0338176deb161c1fb271639d9e155ec489f120b6682d977f43475 WatchSource:0}: Error finding container 889478d715c0338176deb161c1fb271639d9e155ec489f120b6682d977f43475: Status 404 returned error can't find the container with id 889478d715c0338176deb161c1fb271639d9e155ec489f120b6682d977f43475 Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.754096 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6w9h7"] Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.756358 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.776035 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6w9h7"] Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.908813 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-catalog-content\") pod \"community-operators-6w9h7\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") " pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.908971 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgf4w\" (UniqueName: \"kubernetes.io/projected/457add6c-3383-4639-ba29-ce0ac248cad9-kube-api-access-wgf4w\") pod \"community-operators-6w9h7\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") " pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:17 crc kubenswrapper[4775]: I1125 20:28:17.909014 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-utilities\") pod \"community-operators-6w9h7\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") " pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.010500 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgf4w\" (UniqueName: \"kubernetes.io/projected/457add6c-3383-4639-ba29-ce0ac248cad9-kube-api-access-wgf4w\") pod \"community-operators-6w9h7\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") " pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.010558 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-utilities\") pod \"community-operators-6w9h7\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") " pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.010642 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-catalog-content\") pod \"community-operators-6w9h7\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") " pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.012437 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-utilities\") pod \"community-operators-6w9h7\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") " pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.012490 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-catalog-content\") pod \"community-operators-6w9h7\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") " pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.034824 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgf4w\" (UniqueName: \"kubernetes.io/projected/457add6c-3383-4639-ba29-ce0ac248cad9-kube-api-access-wgf4w\") pod \"community-operators-6w9h7\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") " pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.157818 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.282912 4775 generic.go:334] "Generic (PLEG): container finished" podID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerID="ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e" exitCode=0 Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.282978 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw6f6" event={"ID":"57bb9a79-c85a-459a-8407-a4318c6520dd","Type":"ContainerDied","Data":"ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e"} Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.285184 4775 generic.go:334] "Generic (PLEG): container finished" podID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerID="464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3" exitCode=0 Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.285220 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnnsw" event={"ID":"f51b9ade-ab6c-45cd-86db-617278d3b57d","Type":"ContainerDied","Data":"464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3"} Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.285275 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnnsw" event={"ID":"f51b9ade-ab6c-45cd-86db-617278d3b57d","Type":"ContainerStarted","Data":"889478d715c0338176deb161c1fb271639d9e155ec489f120b6682d977f43475"} Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.731774 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6w9h7"] Nov 25 20:28:18 crc kubenswrapper[4775]: W1125 20:28:18.736049 4775 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod457add6c_3383_4639_ba29_ce0ac248cad9.slice/crio-c819a04977ff40f33e56706ffb6e6b22fef3c643069a9b7dfec2f98ad0038b69 WatchSource:0}: Error finding container c819a04977ff40f33e56706ffb6e6b22fef3c643069a9b7dfec2f98ad0038b69: Status 404 returned error can't find the container with id c819a04977ff40f33e56706ffb6e6b22fef3c643069a9b7dfec2f98ad0038b69 Nov 25 20:28:18 crc kubenswrapper[4775]: I1125 20:28:18.855985 4775 scope.go:117] "RemoveContainer" containerID="12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35" Nov 25 20:28:19 crc kubenswrapper[4775]: I1125 20:28:19.296799 4775 generic.go:334] "Generic (PLEG): container finished" podID="457add6c-3383-4639-ba29-ce0ac248cad9" containerID="d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b" exitCode=0 Nov 25 20:28:19 crc kubenswrapper[4775]: I1125 20:28:19.296848 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w9h7" event={"ID":"457add6c-3383-4639-ba29-ce0ac248cad9","Type":"ContainerDied","Data":"d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b"} Nov 25 20:28:19 crc kubenswrapper[4775]: I1125 20:28:19.297168 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w9h7" event={"ID":"457add6c-3383-4639-ba29-ce0ac248cad9","Type":"ContainerStarted","Data":"c819a04977ff40f33e56706ffb6e6b22fef3c643069a9b7dfec2f98ad0038b69"} Nov 25 20:28:19 crc kubenswrapper[4775]: I1125 20:28:19.298704 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnnsw" event={"ID":"f51b9ade-ab6c-45cd-86db-617278d3b57d","Type":"ContainerStarted","Data":"e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22"} Nov 25 20:28:19 crc kubenswrapper[4775]: I1125 20:28:19.305026 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw6f6" 
event={"ID":"57bb9a79-c85a-459a-8407-a4318c6520dd","Type":"ContainerStarted","Data":"8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455"} Nov 25 20:28:19 crc kubenswrapper[4775]: I1125 20:28:19.308014 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75"} Nov 25 20:28:19 crc kubenswrapper[4775]: I1125 20:28:19.366923 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zw6f6" podStartSLOduration=2.922871808 podStartE2EDuration="5.366892987s" podCreationTimestamp="2025-11-25 20:28:14 +0000 UTC" firstStartedPulling="2025-11-25 20:28:16.259338115 +0000 UTC m=+3278.175700481" lastFinishedPulling="2025-11-25 20:28:18.703359304 +0000 UTC m=+3280.619721660" observedRunningTime="2025-11-25 20:28:19.341407735 +0000 UTC m=+3281.257770101" watchObservedRunningTime="2025-11-25 20:28:19.366892987 +0000 UTC m=+3281.283255373" Nov 25 20:28:20 crc kubenswrapper[4775]: I1125 20:28:20.324526 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w9h7" event={"ID":"457add6c-3383-4639-ba29-ce0ac248cad9","Type":"ContainerStarted","Data":"ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2"} Nov 25 20:28:21 crc kubenswrapper[4775]: I1125 20:28:21.339604 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75" exitCode=1 Nov 25 20:28:21 crc kubenswrapper[4775]: I1125 20:28:21.339694 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75"} Nov 25 20:28:21 crc 
kubenswrapper[4775]: I1125 20:28:21.339773 4775 scope.go:117] "RemoveContainer" containerID="12fe23d6c1cd6728bd58c1dc3e65c46ac2ffd1e717f2426dbb3da30dc7c97d35" Nov 25 20:28:21 crc kubenswrapper[4775]: I1125 20:28:21.340991 4775 scope.go:117] "RemoveContainer" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75" Nov 25 20:28:21 crc kubenswrapper[4775]: E1125 20:28:21.341442 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:28:22 crc kubenswrapper[4775]: I1125 20:28:22.359829 4775 generic.go:334] "Generic (PLEG): container finished" podID="457add6c-3383-4639-ba29-ce0ac248cad9" containerID="ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2" exitCode=0 Nov 25 20:28:22 crc kubenswrapper[4775]: I1125 20:28:22.359896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w9h7" event={"ID":"457add6c-3383-4639-ba29-ce0ac248cad9","Type":"ContainerDied","Data":"ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2"} Nov 25 20:28:22 crc kubenswrapper[4775]: I1125 20:28:22.363024 4775 generic.go:334] "Generic (PLEG): container finished" podID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerID="e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22" exitCode=0 Nov 25 20:28:22 crc kubenswrapper[4775]: I1125 20:28:22.363068 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnnsw" event={"ID":"f51b9ade-ab6c-45cd-86db-617278d3b57d","Type":"ContainerDied","Data":"e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22"} Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.104805 4775 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.105169 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.105183 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.106020 4775 scope.go:117] "RemoveContainer" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75" Nov 25 20:28:23 crc kubenswrapper[4775]: E1125 20:28:23.106363 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.188625 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.188866 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.375968 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnnsw" event={"ID":"f51b9ade-ab6c-45cd-86db-617278d3b57d","Type":"ContainerStarted","Data":"e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48"} Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.378595 
4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w9h7" event={"ID":"457add6c-3383-4639-ba29-ce0ac248cad9","Type":"ContainerStarted","Data":"5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634"} Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.407586 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mnnsw" podStartSLOduration=2.898392008 podStartE2EDuration="7.407565669s" podCreationTimestamp="2025-11-25 20:28:16 +0000 UTC" firstStartedPulling="2025-11-25 20:28:18.289379235 +0000 UTC m=+3280.205741601" lastFinishedPulling="2025-11-25 20:28:22.798552886 +0000 UTC m=+3284.714915262" observedRunningTime="2025-11-25 20:28:23.406580512 +0000 UTC m=+3285.322942918" watchObservedRunningTime="2025-11-25 20:28:23.407565669 +0000 UTC m=+3285.323928045" Nov 25 20:28:23 crc kubenswrapper[4775]: I1125 20:28:23.429793 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6w9h7" podStartSLOduration=2.838449638 podStartE2EDuration="6.429761061s" podCreationTimestamp="2025-11-25 20:28:17 +0000 UTC" firstStartedPulling="2025-11-25 20:28:19.299163008 +0000 UTC m=+3281.215525384" lastFinishedPulling="2025-11-25 20:28:22.890474441 +0000 UTC m=+3284.806836807" observedRunningTime="2025-11-25 20:28:23.427158121 +0000 UTC m=+3285.343520507" watchObservedRunningTime="2025-11-25 20:28:23.429761061 +0000 UTC m=+3285.346123467" Nov 25 20:28:24 crc kubenswrapper[4775]: I1125 20:28:24.892259 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:24 crc kubenswrapper[4775]: I1125 20:28:24.892598 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:24 crc kubenswrapper[4775]: I1125 20:28:24.951684 4775 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:25 crc kubenswrapper[4775]: I1125 20:28:25.444208 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:27 crc kubenswrapper[4775]: I1125 20:28:27.129695 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:27 crc kubenswrapper[4775]: I1125 20:28:27.129759 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mnnsw" Nov 25 20:28:27 crc kubenswrapper[4775]: I1125 20:28:27.556775 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zw6f6"] Nov 25 20:28:27 crc kubenswrapper[4775]: I1125 20:28:27.557126 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zw6f6" podUID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerName="registry-server" containerID="cri-o://8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455" gracePeriod=2 Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.024995 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.084232 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-catalog-content\") pod \"57bb9a79-c85a-459a-8407-a4318c6520dd\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.084303 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xds86\" (UniqueName: \"kubernetes.io/projected/57bb9a79-c85a-459a-8407-a4318c6520dd-kube-api-access-xds86\") pod \"57bb9a79-c85a-459a-8407-a4318c6520dd\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.084362 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-utilities\") pod \"57bb9a79-c85a-459a-8407-a4318c6520dd\" (UID: \"57bb9a79-c85a-459a-8407-a4318c6520dd\") " Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.085686 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-utilities" (OuterVolumeSpecName: "utilities") pod "57bb9a79-c85a-459a-8407-a4318c6520dd" (UID: "57bb9a79-c85a-459a-8407-a4318c6520dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.089502 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57bb9a79-c85a-459a-8407-a4318c6520dd-kube-api-access-xds86" (OuterVolumeSpecName: "kube-api-access-xds86") pod "57bb9a79-c85a-459a-8407-a4318c6520dd" (UID: "57bb9a79-c85a-459a-8407-a4318c6520dd"). InnerVolumeSpecName "kube-api-access-xds86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.101344 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57bb9a79-c85a-459a-8407-a4318c6520dd" (UID: "57bb9a79-c85a-459a-8407-a4318c6520dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.158219 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.160058 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.186880 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.186963 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xds86\" (UniqueName: \"kubernetes.io/projected/57bb9a79-c85a-459a-8407-a4318c6520dd-kube-api-access-xds86\") on node \"crc\" DevicePath \"\"" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.186983 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57bb9a79-c85a-459a-8407-a4318c6520dd-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.201409 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mnnsw" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="registry-server" probeResult="failure" output=< Nov 25 20:28:28 crc 
kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Nov 25 20:28:28 crc kubenswrapper[4775]: > Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.233808 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.425535 4775 generic.go:334] "Generic (PLEG): container finished" podID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerID="8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455" exitCode=0 Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.425637 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw6f6" event={"ID":"57bb9a79-c85a-459a-8407-a4318c6520dd","Type":"ContainerDied","Data":"8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455"} Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.425726 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw6f6" event={"ID":"57bb9a79-c85a-459a-8407-a4318c6520dd","Type":"ContainerDied","Data":"885e2d18dc578a00d9b5a47cc576bba8e5d8155952d6b32cb96fddd98e587172"} Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.425779 4775 scope.go:117] "RemoveContainer" containerID="8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.425773 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zw6f6" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.457831 4775 scope.go:117] "RemoveContainer" containerID="ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.474539 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zw6f6"] Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.484639 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zw6f6"] Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.490592 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6w9h7" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.517138 4775 scope.go:117] "RemoveContainer" containerID="69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.549131 4775 scope.go:117] "RemoveContainer" containerID="8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455" Nov 25 20:28:28 crc kubenswrapper[4775]: E1125 20:28:28.551125 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455\": container with ID starting with 8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455 not found: ID does not exist" containerID="8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455" Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.551164 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455"} err="failed to get container status \"8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455\": rpc error: code = NotFound desc = could not 
find container \"8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455\": container with ID starting with 8f28218962d7cb558da81442e683f21576d8574d47cab18795c0f2e4b876a455 not found: ID does not exist"
Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.551191 4775 scope.go:117] "RemoveContainer" containerID="ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e"
Nov 25 20:28:28 crc kubenswrapper[4775]: E1125 20:28:28.551742 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e\": container with ID starting with ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e not found: ID does not exist" containerID="ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e"
Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.551782 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e"} err="failed to get container status \"ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e\": rpc error: code = NotFound desc = could not find container \"ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e\": container with ID starting with ba44c7d54e7f270323fc3362c395be88facaf7c19a94052ab20eec883a66754e not found: ID does not exist"
Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.551807 4775 scope.go:117] "RemoveContainer" containerID="69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df"
Nov 25 20:28:28 crc kubenswrapper[4775]: E1125 20:28:28.552200 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df\": container with ID starting with 69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df not found: ID does not exist" containerID="69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df"
Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.552228 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df"} err="failed to get container status \"69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df\": rpc error: code = NotFound desc = could not find container \"69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df\": container with ID starting with 69d9421e593f7e5599a1abb1f6dc4f49b04f71c29d635fbbf7b17ef6588bf8df not found: ID does not exist"
Nov 25 20:28:28 crc kubenswrapper[4775]: I1125 20:28:28.861211 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57bb9a79-c85a-459a-8407-a4318c6520dd" path="/var/lib/kubelet/pods/57bb9a79-c85a-459a-8407-a4318c6520dd/volumes"
Nov 25 20:28:32 crc kubenswrapper[4775]: I1125 20:28:32.208049 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:28:32 crc kubenswrapper[4775]: I1125 20:28:32.208277 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:28:32 crc kubenswrapper[4775]: I1125 20:28:32.209812 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0"
Nov 25 20:28:32 crc kubenswrapper[4775]: I1125 20:28:32.210976 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"84f752c473a775a41f4257c51ad391db81cc3f86a1d166e3c5e8dac508d7c890"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted"
Nov 25 20:28:32 crc kubenswrapper[4775]: I1125 20:28:32.211056 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://84f752c473a775a41f4257c51ad391db81cc3f86a1d166e3c5e8dac508d7c890" gracePeriod=30
Nov 25 20:28:32 crc kubenswrapper[4775]: I1125 20:28:32.218802 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:28:32 crc kubenswrapper[4775]: I1125 20:28:32.963301 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6w9h7"]
Nov 25 20:28:32 crc kubenswrapper[4775]: I1125 20:28:32.964312 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6w9h7" podUID="457add6c-3383-4639-ba29-ce0ac248cad9" containerName="registry-server" containerID="cri-o://5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634" gracePeriod=2
Nov 25 20:28:33 crc kubenswrapper[4775]: E1125 20:28:33.241983 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod457add6c_3383_4639_ba29_ce0ac248cad9.slice/crio-5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod457add6c_3383_4639_ba29_ce0ac248cad9.slice/crio-conmon-5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634.scope\": RecentStats: unable to find data in memory cache]"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.457266 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6w9h7"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.481401 4775 generic.go:334] "Generic (PLEG): container finished" podID="457add6c-3383-4639-ba29-ce0ac248cad9" containerID="5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634" exitCode=0
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.481445 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w9h7" event={"ID":"457add6c-3383-4639-ba29-ce0ac248cad9","Type":"ContainerDied","Data":"5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634"}
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.481472 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w9h7" event={"ID":"457add6c-3383-4639-ba29-ce0ac248cad9","Type":"ContainerDied","Data":"c819a04977ff40f33e56706ffb6e6b22fef3c643069a9b7dfec2f98ad0038b69"}
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.481488 4775 scope.go:117] "RemoveContainer" containerID="5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.481618 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6w9h7"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.501368 4775 scope.go:117] "RemoveContainer" containerID="ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.525566 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-utilities\") pod \"457add6c-3383-4639-ba29-ce0ac248cad9\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") "
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.525686 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-catalog-content\") pod \"457add6c-3383-4639-ba29-ce0ac248cad9\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") "
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.525770 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgf4w\" (UniqueName: \"kubernetes.io/projected/457add6c-3383-4639-ba29-ce0ac248cad9-kube-api-access-wgf4w\") pod \"457add6c-3383-4639-ba29-ce0ac248cad9\" (UID: \"457add6c-3383-4639-ba29-ce0ac248cad9\") "
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.526896 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-utilities" (OuterVolumeSpecName: "utilities") pod "457add6c-3383-4639-ba29-ce0ac248cad9" (UID: "457add6c-3383-4639-ba29-ce0ac248cad9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.532847 4775 scope.go:117] "RemoveContainer" containerID="d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.533617 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457add6c-3383-4639-ba29-ce0ac248cad9-kube-api-access-wgf4w" (OuterVolumeSpecName: "kube-api-access-wgf4w") pod "457add6c-3383-4639-ba29-ce0ac248cad9" (UID: "457add6c-3383-4639-ba29-ce0ac248cad9"). InnerVolumeSpecName "kube-api-access-wgf4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.597750 4775 scope.go:117] "RemoveContainer" containerID="5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634"
Nov 25 20:28:33 crc kubenswrapper[4775]: E1125 20:28:33.600258 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634\": container with ID starting with 5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634 not found: ID does not exist" containerID="5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.600292 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634"} err="failed to get container status \"5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634\": rpc error: code = NotFound desc = could not find container \"5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634\": container with ID starting with 5e2de97c1227287b10ff7ee076321574a071cde69d9222df8753e12bf9a9b634 not found: ID does not exist"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.600313 4775 scope.go:117] "RemoveContainer" containerID="ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2"
Nov 25 20:28:33 crc kubenswrapper[4775]: E1125 20:28:33.600560 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2\": container with ID starting with ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2 not found: ID does not exist" containerID="ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.600587 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2"} err="failed to get container status \"ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2\": rpc error: code = NotFound desc = could not find container \"ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2\": container with ID starting with ed7b3647a3c5d0bdd8aa63fb0c75ca25e991277428a812d045d5807cc52453c2 not found: ID does not exist"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.600600 4775 scope.go:117] "RemoveContainer" containerID="d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.600707 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "457add6c-3383-4639-ba29-ce0ac248cad9" (UID: "457add6c-3383-4639-ba29-ce0ac248cad9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:28:33 crc kubenswrapper[4775]: E1125 20:28:33.600844 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b\": container with ID starting with d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b not found: ID does not exist" containerID="d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.600867 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b"} err="failed to get container status \"d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b\": rpc error: code = NotFound desc = could not find container \"d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b\": container with ID starting with d0e0a2ad4e7ffe7513b3c5d5e42686406286d26c425b1a596b269a3c40a2541b not found: ID does not exist"
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.628115 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.628145 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457add6c-3383-4639-ba29-ce0ac248cad9-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.628158 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgf4w\" (UniqueName: \"kubernetes.io/projected/457add6c-3383-4639-ba29-ce0ac248cad9-kube-api-access-wgf4w\") on node \"crc\" DevicePath \"\""
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.823500 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6w9h7"]
Nov 25 20:28:33 crc kubenswrapper[4775]: I1125 20:28:33.833784 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6w9h7"]
Nov 25 20:28:34 crc kubenswrapper[4775]: I1125 20:28:34.863970 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457add6c-3383-4639-ba29-ce0ac248cad9" path="/var/lib/kubelet/pods/457add6c-3383-4639-ba29-ce0ac248cad9/volumes"
Nov 25 20:28:35 crc kubenswrapper[4775]: I1125 20:28:35.507924 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="84f752c473a775a41f4257c51ad391db81cc3f86a1d166e3c5e8dac508d7c890" exitCode=0
Nov 25 20:28:35 crc kubenswrapper[4775]: I1125 20:28:35.507996 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"84f752c473a775a41f4257c51ad391db81cc3f86a1d166e3c5e8dac508d7c890"}
Nov 25 20:28:35 crc kubenswrapper[4775]: I1125 20:28:35.508404 4775 scope.go:117] "RemoveContainer" containerID="088817aeb7cfcbe4e4d79e8223e322f2dd8509e1901ac1b6f753c2854b85194e"
Nov 25 20:28:36 crc kubenswrapper[4775]: I1125 20:28:36.524080 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"e0305973d1a3a716267cab532e3e78264d897eeea2b5daf2782dc3f846de6ce0"}
Nov 25 20:28:36 crc kubenswrapper[4775]: I1125 20:28:36.524621 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0"
Nov 25 20:28:37 crc kubenswrapper[4775]: I1125 20:28:37.847588 4775 scope.go:117] "RemoveContainer" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75"
Nov 25 20:28:37 crc kubenswrapper[4775]: E1125 20:28:37.848090 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:28:38 crc kubenswrapper[4775]: I1125 20:28:38.208879 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mnnsw" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="registry-server" probeResult="failure" output=<
Nov 25 20:28:38 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s
Nov 25 20:28:38 crc kubenswrapper[4775]: >
Nov 25 20:28:47 crc kubenswrapper[4775]: I1125 20:28:47.198460 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mnnsw"
Nov 25 20:28:47 crc kubenswrapper[4775]: I1125 20:28:47.257988 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mnnsw"
Nov 25 20:28:48 crc kubenswrapper[4775]: I1125 20:28:48.173500 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mnnsw"]
Nov 25 20:28:48 crc kubenswrapper[4775]: I1125 20:28:48.664008 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mnnsw" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="registry-server" containerID="cri-o://e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48" gracePeriod=2
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.158099 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mnnsw"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.260644 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvvwr\" (UniqueName: \"kubernetes.io/projected/f51b9ade-ab6c-45cd-86db-617278d3b57d-kube-api-access-jvvwr\") pod \"f51b9ade-ab6c-45cd-86db-617278d3b57d\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") "
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.260849 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-catalog-content\") pod \"f51b9ade-ab6c-45cd-86db-617278d3b57d\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") "
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.260896 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-utilities\") pod \"f51b9ade-ab6c-45cd-86db-617278d3b57d\" (UID: \"f51b9ade-ab6c-45cd-86db-617278d3b57d\") "
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.262224 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-utilities" (OuterVolumeSpecName: "utilities") pod "f51b9ade-ab6c-45cd-86db-617278d3b57d" (UID: "f51b9ade-ab6c-45cd-86db-617278d3b57d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.268259 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51b9ade-ab6c-45cd-86db-617278d3b57d-kube-api-access-jvvwr" (OuterVolumeSpecName: "kube-api-access-jvvwr") pod "f51b9ade-ab6c-45cd-86db-617278d3b57d" (UID: "f51b9ade-ab6c-45cd-86db-617278d3b57d"). InnerVolumeSpecName "kube-api-access-jvvwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.362600 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.362624 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvvwr\" (UniqueName: \"kubernetes.io/projected/f51b9ade-ab6c-45cd-86db-617278d3b57d-kube-api-access-jvvwr\") on node \"crc\" DevicePath \"\""
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.374525 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f51b9ade-ab6c-45cd-86db-617278d3b57d" (UID: "f51b9ade-ab6c-45cd-86db-617278d3b57d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.465192 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b9ade-ab6c-45cd-86db-617278d3b57d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.681626 4775 generic.go:334] "Generic (PLEG): container finished" podID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerID="e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48" exitCode=0
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.681723 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnnsw" event={"ID":"f51b9ade-ab6c-45cd-86db-617278d3b57d","Type":"ContainerDied","Data":"e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48"}
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.682063 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mnnsw" event={"ID":"f51b9ade-ab6c-45cd-86db-617278d3b57d","Type":"ContainerDied","Data":"889478d715c0338176deb161c1fb271639d9e155ec489f120b6682d977f43475"}
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.681797 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mnnsw"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.682097 4775 scope.go:117] "RemoveContainer" containerID="e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.705270 4775 scope.go:117] "RemoveContainer" containerID="e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.718199 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mnnsw"]
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.726604 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mnnsw"]
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.747466 4775 scope.go:117] "RemoveContainer" containerID="464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.781600 4775 scope.go:117] "RemoveContainer" containerID="e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48"
Nov 25 20:28:49 crc kubenswrapper[4775]: E1125 20:28:49.782099 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48\": container with ID starting with e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48 not found: ID does not exist" containerID="e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.782143 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48"} err="failed to get container status \"e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48\": rpc error: code = NotFound desc = could not find container \"e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48\": container with ID starting with e618add812cf5f1765066e975ebdb6c366f3a7db4d9b4497445e0c477924ac48 not found: ID does not exist"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.782169 4775 scope.go:117] "RemoveContainer" containerID="e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22"
Nov 25 20:28:49 crc kubenswrapper[4775]: E1125 20:28:49.782435 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22\": container with ID starting with e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22 not found: ID does not exist" containerID="e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.782471 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22"} err="failed to get container status \"e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22\": rpc error: code = NotFound desc = could not find container \"e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22\": container with ID starting with e30302739d244eba72304cffaf628dbfc21ec1c87b151f88ced7c38b572d3d22 not found: ID does not exist"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.782499 4775 scope.go:117] "RemoveContainer" containerID="464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3"
Nov 25 20:28:49 crc kubenswrapper[4775]: E1125 20:28:49.782707 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3\": container with ID starting with 464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3 not found: ID does not exist" containerID="464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3"
Nov 25 20:28:49 crc kubenswrapper[4775]: I1125 20:28:49.782735 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3"} err="failed to get container status \"464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3\": rpc error: code = NotFound desc = could not find container \"464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3\": container with ID starting with 464cc793419ee6b9e6d916119ec4042ea8fec003a7fa86f670ab8c725645b4d3 not found: ID does not exist"
Nov 25 20:28:50 crc kubenswrapper[4775]: I1125 20:28:50.864066 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" path="/var/lib/kubelet/pods/f51b9ade-ab6c-45cd-86db-617278d3b57d/volumes"
Nov 25 20:28:51 crc kubenswrapper[4775]: I1125 20:28:51.846970 4775 scope.go:117] "RemoveContainer" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75"
Nov 25 20:28:51 crc kubenswrapper[4775]: E1125 20:28:51.847696 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:28:53 crc kubenswrapper[4775]: I1125 20:28:53.312089 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:28:53 crc kubenswrapper[4775]: I1125 20:28:53.312165 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:03 crc kubenswrapper[4775]: I1125 20:29:03.134625 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:03 crc kubenswrapper[4775]: I1125 20:29:03.221431 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:04 crc kubenswrapper[4775]: I1125 20:29:04.848010 4775 scope.go:117] "RemoveContainer" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75"
Nov 25 20:29:04 crc kubenswrapper[4775]: E1125 20:29:04.849138 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:29:12 crc kubenswrapper[4775]: I1125 20:29:12.206077 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:12 crc kubenswrapper[4775]: I1125 20:29:12.207673 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:12 crc kubenswrapper[4775]: I1125 20:29:12.207716 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0"
Nov 25 20:29:12 crc kubenswrapper[4775]: I1125 20:29:12.208611 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"e0305973d1a3a716267cab532e3e78264d897eeea2b5daf2782dc3f846de6ce0"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted"
Nov 25 20:29:12 crc kubenswrapper[4775]: I1125 20:29:12.208688 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://e0305973d1a3a716267cab532e3e78264d897eeea2b5daf2782dc3f846de6ce0" gracePeriod=30
Nov 25 20:29:12 crc kubenswrapper[4775]: I1125 20:29:12.215048 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF"
Nov 25 20:29:15 crc kubenswrapper[4775]: I1125 20:29:15.847428 4775 scope.go:117] "RemoveContainer" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75"
Nov 25 20:29:15 crc kubenswrapper[4775]: E1125 20:29:15.848439 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:29:16 crc kubenswrapper[4775]: I1125 20:29:16.004337 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="e0305973d1a3a716267cab532e3e78264d897eeea2b5daf2782dc3f846de6ce0" exitCode=0
Nov 25 20:29:16 crc kubenswrapper[4775]: I1125 20:29:16.004390 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"e0305973d1a3a716267cab532e3e78264d897eeea2b5daf2782dc3f846de6ce0"}
Nov 25 20:29:16 crc kubenswrapper[4775]: I1125 20:29:16.004464 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7"}
Nov 25 20:29:16 crc kubenswrapper[4775]: I1125 20:29:16.004493 4775 scope.go:117] "RemoveContainer" containerID="84f752c473a775a41f4257c51ad391db81cc3f86a1d166e3c5e8dac508d7c890"
Nov 25 20:29:16 crc kubenswrapper[4775]: I1125 20:29:16.004683 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0"
Nov 25 20:29:27 crc kubenswrapper[4775]: I1125 20:29:27.848310 4775 scope.go:117] "RemoveContainer" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75"
Nov 25 20:29:27 crc kubenswrapper[4775]: E1125 20:29:27.849216 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:29:33 crc kubenswrapper[4775]: I1125 20:29:33.217074 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:33 crc kubenswrapper[4775]: I1125 20:29:33.254429 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:42 crc kubenswrapper[4775]: I1125 20:29:42.847911 4775 scope.go:117] "RemoveContainer" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75"
Nov 25 20:29:43 crc kubenswrapper[4775]: I1125 20:29:43.116623 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:43 crc kubenswrapper[4775]: I1125 20:29:43.169137 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:44 crc kubenswrapper[4775]: I1125 20:29:44.331963 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"}
Nov 25 20:29:45 crc kubenswrapper[4775]: E1125 20:29:45.087555 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a88473d_4ba5_4147_bf60_128f0b7ea8f6.slice/crio-deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a88473d_4ba5_4147_bf60_128f0b7ea8f6.slice/crio-conmon-deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361.scope\": RecentStats: unable to find data in memory cache]"
Nov 25 20:29:45 crc kubenswrapper[4775]: I1125 20:29:45.346329 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361" exitCode=1
Nov 25 20:29:45 crc kubenswrapper[4775]: I1125 20:29:45.346400 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"}
Nov 25 20:29:45 crc kubenswrapper[4775]: I1125 20:29:45.346490 4775 scope.go:117] "RemoveContainer" containerID="c4569747201653b558a1323a0bd00f3656b2c0238aa26620a78f5c3890f16e75"
Nov 25 20:29:45 crc kubenswrapper[4775]: I1125 20:29:45.348000 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"
Nov 25 20:29:45 crc kubenswrapper[4775]: E1125 20:29:45.348863 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:29:52 crc kubenswrapper[4775]: I1125 20:29:52.217323 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:52 crc kubenswrapper[4775]: I1125 20:29:52.217925 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0"
Nov 25 20:29:52 crc kubenswrapper[4775]: I1125 20:29:52.218792 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted"
Nov 25 20:29:52 crc kubenswrapper[4775]: I1125 20:29:52.218831 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7" gracePeriod=30
Nov 25 20:29:52 crc kubenswrapper[4775]: I1125 20:29:52.219680 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:52 crc kubenswrapper[4775]: I1125 20:29:52.225140 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:29:53 crc kubenswrapper[4775]: I1125 20:29:53.105317 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Nov 25 20:29:53 crc kubenswrapper[4775]: I1125 20:29:53.105712 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Nov 25 20:29:53 crc kubenswrapper[4775]: I1125 20:29:53.106494 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"
Nov 25 20:29:53 crc kubenswrapper[4775]: E1125 20:29:53.106788 4775 pod_workers.go:1301] "Error syncing pod, skipping"
err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:29:55 crc kubenswrapper[4775]: E1125 20:29:55.461712 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:29:55 crc kubenswrapper[4775]: I1125 20:29:55.477243 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7" exitCode=0 Nov 25 20:29:55 crc kubenswrapper[4775]: I1125 20:29:55.477297 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7"} Nov 25 20:29:55 crc kubenswrapper[4775]: I1125 20:29:55.477361 4775 scope.go:117] "RemoveContainer" containerID="e0305973d1a3a716267cab532e3e78264d897eeea2b5daf2782dc3f846de6ce0" Nov 25 20:29:55 crc kubenswrapper[4775]: I1125 20:29:55.478143 4775 scope.go:117] "RemoveContainer" containerID="8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7" Nov 25 20:29:55 crc kubenswrapper[4775]: E1125 20:29:55.478408 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 
20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.147574 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n"] Nov 25 20:30:00 crc kubenswrapper[4775]: E1125 20:30:00.148460 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerName="registry-server" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148475 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerName="registry-server" Nov 25 20:30:00 crc kubenswrapper[4775]: E1125 20:30:00.148495 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457add6c-3383-4639-ba29-ce0ac248cad9" containerName="registry-server" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148503 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="457add6c-3383-4639-ba29-ce0ac248cad9" containerName="registry-server" Nov 25 20:30:00 crc kubenswrapper[4775]: E1125 20:30:00.148519 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerName="extract-content" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148527 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerName="extract-content" Nov 25 20:30:00 crc kubenswrapper[4775]: E1125 20:30:00.148555 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457add6c-3383-4639-ba29-ce0ac248cad9" containerName="extract-content" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148563 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="457add6c-3383-4639-ba29-ce0ac248cad9" containerName="extract-content" Nov 25 20:30:00 crc kubenswrapper[4775]: E1125 20:30:00.148579 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="extract-content" Nov 25 20:30:00 crc 
kubenswrapper[4775]: I1125 20:30:00.148586 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="extract-content" Nov 25 20:30:00 crc kubenswrapper[4775]: E1125 20:30:00.148605 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="extract-utilities" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148612 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="extract-utilities" Nov 25 20:30:00 crc kubenswrapper[4775]: E1125 20:30:00.148624 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457add6c-3383-4639-ba29-ce0ac248cad9" containerName="extract-utilities" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148631 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="457add6c-3383-4639-ba29-ce0ac248cad9" containerName="extract-utilities" Nov 25 20:30:00 crc kubenswrapper[4775]: E1125 20:30:00.148640 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="registry-server" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148666 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="registry-server" Nov 25 20:30:00 crc kubenswrapper[4775]: E1125 20:30:00.148678 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerName="extract-utilities" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148685 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerName="extract-utilities" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148878 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="457add6c-3383-4639-ba29-ce0ac248cad9" containerName="registry-server" Nov 25 20:30:00 crc 
kubenswrapper[4775]: I1125 20:30:00.148898 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="57bb9a79-c85a-459a-8407-a4318c6520dd" containerName="registry-server" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.148908 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51b9ade-ab6c-45cd-86db-617278d3b57d" containerName="registry-server" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.149520 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.151465 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.151636 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.169175 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n"] Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.262408 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-config-volume\") pod \"collect-profiles-29401710-xk27n\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.262456 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-secret-volume\") pod \"collect-profiles-29401710-xk27n\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.262496 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p9b4\" (UniqueName: \"kubernetes.io/projected/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-kube-api-access-5p9b4\") pod \"collect-profiles-29401710-xk27n\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.365144 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-config-volume\") pod \"collect-profiles-29401710-xk27n\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.365220 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-secret-volume\") pod \"collect-profiles-29401710-xk27n\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.365289 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p9b4\" (UniqueName: \"kubernetes.io/projected/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-kube-api-access-5p9b4\") pod \"collect-profiles-29401710-xk27n\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.366774 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-config-volume\") pod \"collect-profiles-29401710-xk27n\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.375546 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-secret-volume\") pod \"collect-profiles-29401710-xk27n\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.385579 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p9b4\" (UniqueName: \"kubernetes.io/projected/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-kube-api-access-5p9b4\") pod \"collect-profiles-29401710-xk27n\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.472929 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:00 crc kubenswrapper[4775]: I1125 20:30:00.940274 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n"] Nov 25 20:30:00 crc kubenswrapper[4775]: W1125 20:30:00.944545 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95a9c7dc_e57a_43c3_9c86_6d0ae6b09ce6.slice/crio-41b6b36f4377af9b612744f7456d1528c0589ad195a92e7fa784e7c50f9014df WatchSource:0}: Error finding container 41b6b36f4377af9b612744f7456d1528c0589ad195a92e7fa784e7c50f9014df: Status 404 returned error can't find the container with id 41b6b36f4377af9b612744f7456d1528c0589ad195a92e7fa784e7c50f9014df Nov 25 20:30:01 crc kubenswrapper[4775]: I1125 20:30:01.562559 4775 generic.go:334] "Generic (PLEG): container finished" podID="95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6" containerID="7b7848b04a6d83725de4f4f87c79c8a79300e95d337fd950a0e239e1055bcd8c" exitCode=0 Nov 25 20:30:01 crc kubenswrapper[4775]: I1125 20:30:01.562681 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" event={"ID":"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6","Type":"ContainerDied","Data":"7b7848b04a6d83725de4f4f87c79c8a79300e95d337fd950a0e239e1055bcd8c"} Nov 25 20:30:01 crc kubenswrapper[4775]: I1125 20:30:01.562908 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" event={"ID":"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6","Type":"ContainerStarted","Data":"41b6b36f4377af9b612744f7456d1528c0589ad195a92e7fa784e7c50f9014df"} Nov 25 20:30:02 crc kubenswrapper[4775]: I1125 20:30:02.993337 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.104527 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.105318 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361" Nov 25 20:30:03 crc kubenswrapper[4775]: E1125 20:30:03.105709 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.123403 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-config-volume\") pod \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.123564 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p9b4\" (UniqueName: \"kubernetes.io/projected/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-kube-api-access-5p9b4\") pod \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.123755 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-secret-volume\") pod \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\" (UID: \"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6\") " Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 
20:30:03.125478 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-config-volume" (OuterVolumeSpecName: "config-volume") pod "95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6" (UID: "95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.129524 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-kube-api-access-5p9b4" (OuterVolumeSpecName: "kube-api-access-5p9b4") pod "95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6" (UID: "95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6"). InnerVolumeSpecName "kube-api-access-5p9b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.133844 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6" (UID: "95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.226375 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.226407 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p9b4\" (UniqueName: \"kubernetes.io/projected/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-kube-api-access-5p9b4\") on node \"crc\" DevicePath \"\"" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.226418 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.588133 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" event={"ID":"95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6","Type":"ContainerDied","Data":"41b6b36f4377af9b612744f7456d1528c0589ad195a92e7fa784e7c50f9014df"} Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.588492 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b6b36f4377af9b612744f7456d1528c0589ad195a92e7fa784e7c50f9014df" Nov 25 20:30:03 crc kubenswrapper[4775]: I1125 20:30:03.588207 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-xk27n" Nov 25 20:30:04 crc kubenswrapper[4775]: I1125 20:30:04.092568 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn"] Nov 25 20:30:04 crc kubenswrapper[4775]: I1125 20:30:04.105084 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401665-xhlbn"] Nov 25 20:30:04 crc kubenswrapper[4775]: I1125 20:30:04.869515 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47008dd-7c5a-45e1-af24-e6726af501ea" path="/var/lib/kubelet/pods/e47008dd-7c5a-45e1-af24-e6726af501ea/volumes" Nov 25 20:30:10 crc kubenswrapper[4775]: I1125 20:30:10.847069 4775 scope.go:117] "RemoveContainer" containerID="8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7" Nov 25 20:30:10 crc kubenswrapper[4775]: E1125 20:30:10.847551 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:30:11 crc kubenswrapper[4775]: I1125 20:30:11.070356 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:30:11 crc kubenswrapper[4775]: I1125 20:30:11.070438 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:30:15 crc kubenswrapper[4775]: I1125 20:30:15.847914 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361" Nov 25 20:30:15 crc kubenswrapper[4775]: E1125 20:30:15.848750 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:30:25 crc kubenswrapper[4775]: I1125 20:30:25.847770 4775 scope.go:117] "RemoveContainer" containerID="8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7" Nov 25 20:30:25 crc kubenswrapper[4775]: E1125 20:30:25.849593 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:30:28 crc kubenswrapper[4775]: I1125 20:30:28.852441 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361" Nov 25 20:30:28 crc kubenswrapper[4775]: E1125 20:30:28.853133 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:30:36 crc kubenswrapper[4775]: I1125 20:30:36.849303 4775 scope.go:117] "RemoveContainer" 
containerID="8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7" Nov 25 20:30:38 crc kubenswrapper[4775]: I1125 20:30:38.011148 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"fb583c10d1345949d3855ec5e42489612f7a90ed9abfcc45d6c83db8fddfa108"} Nov 25 20:30:38 crc kubenswrapper[4775]: I1125 20:30:38.015754 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:30:41 crc kubenswrapper[4775]: I1125 20:30:41.070168 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:30:41 crc kubenswrapper[4775]: I1125 20:30:41.070819 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:30:41 crc kubenswrapper[4775]: I1125 20:30:41.848588 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361" Nov 25 20:30:41 crc kubenswrapper[4775]: E1125 20:30:41.849163 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:30:52 crc kubenswrapper[4775]: I1125 20:30:52.376099 4775 scope.go:117] "RemoveContainer" 
containerID="dea22e944e91e4ef838ab530ff6a1bac1550ef63f34c2d1a2d3a81b8ff9fc7c8" Nov 25 20:30:53 crc kubenswrapper[4775]: I1125 20:30:53.236555 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:30:53 crc kubenswrapper[4775]: I1125 20:30:53.377606 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:30:55 crc kubenswrapper[4775]: I1125 20:30:55.847723 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361" Nov 25 20:30:55 crc kubenswrapper[4775]: E1125 20:30:55.848275 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:31:03 crc kubenswrapper[4775]: I1125 20:31:03.225957 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:31:03 crc kubenswrapper[4775]: I1125 20:31:03.268179 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:31:06 crc kubenswrapper[4775]: I1125 20:31:06.847458 4775 scope.go:117] "RemoveContainer" 
containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361" Nov 25 20:31:06 crc kubenswrapper[4775]: E1125 20:31:06.848338 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:31:11 crc kubenswrapper[4775]: I1125 20:31:11.070256 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:31:11 crc kubenswrapper[4775]: I1125 20:31:11.070855 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:31:11 crc kubenswrapper[4775]: I1125 20:31:11.070916 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 20:31:11 crc kubenswrapper[4775]: I1125 20:31:11.071813 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec11d1f0a3f0d4a7c90c4c38fcc425cb1bce57f664d19903b36c7ccbbe002886"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:31:11 crc kubenswrapper[4775]: I1125 20:31:11.071891 4775 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://ec11d1f0a3f0d4a7c90c4c38fcc425cb1bce57f664d19903b36c7ccbbe002886" gracePeriod=600
Nov 25 20:31:11 crc kubenswrapper[4775]: I1125 20:31:11.360870 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="ec11d1f0a3f0d4a7c90c4c38fcc425cb1bce57f664d19903b36c7ccbbe002886" exitCode=0
Nov 25 20:31:11 crc kubenswrapper[4775]: I1125 20:31:11.360929 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"ec11d1f0a3f0d4a7c90c4c38fcc425cb1bce57f664d19903b36c7ccbbe002886"}
Nov 25 20:31:11 crc kubenswrapper[4775]: I1125 20:31:11.361204 4775 scope.go:117] "RemoveContainer" containerID="1d57c982cdb3af143018479b73a6ac1c19485ecb7f5d029569d3846a530e3adf"
Nov 25 20:31:12 crc kubenswrapper[4775]: I1125 20:31:12.204433 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:31:12 crc kubenswrapper[4775]: I1125 20:31:12.204751 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0"
Nov 25 20:31:12 crc kubenswrapper[4775]: I1125 20:31:12.205312 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"fb583c10d1345949d3855ec5e42489612f7a90ed9abfcc45d6c83db8fddfa108"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted"
Nov 25 20:31:12 crc kubenswrapper[4775]: I1125 20:31:12.205340 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://fb583c10d1345949d3855ec5e42489612f7a90ed9abfcc45d6c83db8fddfa108" gracePeriod=30
Nov 25 20:31:12 crc kubenswrapper[4775]: I1125 20:31:12.208439 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:31:12 crc kubenswrapper[4775]: I1125 20:31:12.213050 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF"
Nov 25 20:31:12 crc kubenswrapper[4775]: I1125 20:31:12.370770 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685"}
Nov 25 20:31:16 crc kubenswrapper[4775]: I1125 20:31:16.415533 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="fb583c10d1345949d3855ec5e42489612f7a90ed9abfcc45d6c83db8fddfa108" exitCode=0
Nov 25 20:31:16 crc kubenswrapper[4775]: I1125 20:31:16.415633 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"fb583c10d1345949d3855ec5e42489612f7a90ed9abfcc45d6c83db8fddfa108"}
Nov 25 20:31:16 crc kubenswrapper[4775]: I1125 20:31:16.416151 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3"}
Nov 25 20:31:16 crc kubenswrapper[4775]: I1125 20:31:16.416179 4775 scope.go:117] "RemoveContainer" containerID="8b3d72e84cd0e1b99f400b5378959154a55dc4a65f346c9d7559b382a4e240d7"
Nov 25 20:31:16 crc kubenswrapper[4775]: I1125 20:31:16.416367 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0"
Nov 25 20:31:20 crc kubenswrapper[4775]: I1125 20:31:20.847637 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"
Nov 25 20:31:20 crc kubenswrapper[4775]: E1125 20:31:20.848463 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:31:33 crc kubenswrapper[4775]: I1125 20:31:33.209302 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:31:33 crc kubenswrapper[4775]: I1125 20:31:33.366576 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:31:35 crc kubenswrapper[4775]: I1125 20:31:35.847349 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"
Nov 25 20:31:35 crc kubenswrapper[4775]: E1125 20:31:35.848009 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:31:43 crc kubenswrapper[4775]: I1125 20:31:43.091861 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:31:43 crc kubenswrapper[4775]: I1125 20:31:43.121789 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:31:50 crc kubenswrapper[4775]: I1125 20:31:50.848171 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"
Nov 25 20:31:50 crc kubenswrapper[4775]: E1125 20:31:50.849216 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:31:52 crc kubenswrapper[4775]: I1125 20:31:52.205441 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:31:52 crc kubenswrapper[4775]: I1125 20:31:52.205982 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0"
Nov 25 20:31:52 crc kubenswrapper[4775]: I1125 20:31:52.207209 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted"
Nov 25 20:31:52 crc kubenswrapper[4775]: I1125 20:31:52.207293 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" gracePeriod=30
Nov 25 20:31:52 crc kubenswrapper[4775]: I1125 20:31:52.207961 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:31:52 crc kubenswrapper[4775]: I1125 20:31:52.214510 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF"
Nov 25 20:31:55 crc kubenswrapper[4775]: E1125 20:31:55.467819 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:31:56 crc kubenswrapper[4775]: I1125 20:31:56.125285 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" exitCode=0
Nov 25 20:31:56 crc kubenswrapper[4775]: I1125 20:31:56.125395 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3"}
Nov 25 20:31:56 crc kubenswrapper[4775]: I1125 20:31:56.125760 4775 scope.go:117] "RemoveContainer" containerID="fb583c10d1345949d3855ec5e42489612f7a90ed9abfcc45d6c83db8fddfa108"
Nov 25 20:31:56 crc kubenswrapper[4775]: I1125 20:31:56.126968 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3"
Nov 25 20:31:56 crc kubenswrapper[4775]: E1125 20:31:56.127506 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:32:03 crc kubenswrapper[4775]: I1125 20:32:03.846885 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"
Nov 25 20:32:03 crc kubenswrapper[4775]: E1125 20:32:03.848526 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:32:07 crc kubenswrapper[4775]: I1125 20:32:07.847722 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3"
Nov 25 20:32:07 crc kubenswrapper[4775]: E1125 20:32:07.848546 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:32:15 crc kubenswrapper[4775]: I1125 20:32:15.847866 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"
Nov 25 20:32:15 crc kubenswrapper[4775]: E1125 20:32:15.849204 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:32:22 crc kubenswrapper[4775]: I1125 20:32:22.847065 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3"
Nov 25 20:32:22 crc kubenswrapper[4775]: E1125 20:32:22.847918 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:32:27 crc kubenswrapper[4775]: I1125 20:32:27.847627 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"
Nov 25 20:32:28 crc kubenswrapper[4775]: I1125 20:32:28.528095 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59"}
Nov 25 20:32:30 crc kubenswrapper[4775]: I1125 20:32:30.563832 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" exitCode=1
Nov 25 20:32:30 crc kubenswrapper[4775]: I1125 20:32:30.563897 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59"}
Nov 25 20:32:30 crc kubenswrapper[4775]: I1125 20:32:30.564150 4775 scope.go:117] "RemoveContainer" containerID="deda675bfa0264ebaecbd7158ca896dbbbc08ecdab57e0843e124b400d760361"
Nov 25 20:32:30 crc kubenswrapper[4775]: I1125 20:32:30.565067 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59"
Nov 25 20:32:30 crc kubenswrapper[4775]: E1125 20:32:30.565683 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:32:33 crc kubenswrapper[4775]: I1125 20:32:33.104449 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Nov 25 20:32:33 crc kubenswrapper[4775]: I1125 20:32:33.105129 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Nov 25 20:32:33 crc kubenswrapper[4775]: I1125 20:32:33.105850 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59"
Nov 25 20:32:33 crc kubenswrapper[4775]: E1125 20:32:33.106228 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:32:37 crc kubenswrapper[4775]: I1125 20:32:37.848348 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3"
Nov 25 20:32:37 crc kubenswrapper[4775]: E1125 20:32:37.849446 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:32:43 crc kubenswrapper[4775]: I1125 20:32:43.104965 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0"
Nov 25 20:32:43 crc kubenswrapper[4775]: I1125 20:32:43.106128 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59"
Nov 25 20:32:43 crc kubenswrapper[4775]: E1125 20:32:43.106460 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:32:52 crc kubenswrapper[4775]: I1125 20:32:52.507984 4775 scope.go:117] "RemoveContainer" containerID="5c7f1aac41d6f0fc84e5f378fe24a153a2e02c970b343eb23c5455a7b4059d70"
Nov 25 20:32:52 crc kubenswrapper[4775]: I1125 20:32:52.545418 4775 scope.go:117] "RemoveContainer" containerID="c216d1f49a7dc1c25fb9b3cbc6eacd20805f588e4d2247a708a4e6955d2bf58a"
Nov 25 20:32:52 crc kubenswrapper[4775]: I1125 20:32:52.599316 4775 scope.go:117] "RemoveContainer" containerID="10d348e4f28ed6b7fc06248c24deed879e1761e29ac5a4df15897b94cab370e1"
Nov 25 20:32:52 crc kubenswrapper[4775]: I1125 20:32:52.626796 4775 scope.go:117] "RemoveContainer" containerID="67b5ff70a84262fc3799e9f96f5dd95f3f46b5973facd9de041d9fb029c5c573"
Nov 25 20:32:52 crc kubenswrapper[4775]: I1125 20:32:52.847506 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3"
Nov 25 20:32:52 crc kubenswrapper[4775]: E1125 20:32:52.847723 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.117439 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5m9f8"]
Nov 25 20:32:54 crc kubenswrapper[4775]: E1125 20:32:54.117856 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6" containerName="collect-profiles"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.117868 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6" containerName="collect-profiles"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.118067 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a9c7dc-e57a-43c3-9c86-6d0ae6b09ce6" containerName="collect-profiles"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.119388 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.146878 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5m9f8"]
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.239023 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-catalog-content\") pod \"certified-operators-5m9f8\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") " pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.239287 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-utilities\") pod \"certified-operators-5m9f8\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") " pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.239397 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbcqn\" (UniqueName: \"kubernetes.io/projected/8f486e77-9eca-4bc0-9826-d9c2c47558dd-kube-api-access-kbcqn\") pod \"certified-operators-5m9f8\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") " pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.341270 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-catalog-content\") pod \"certified-operators-5m9f8\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") " pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.341351 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-utilities\") pod \"certified-operators-5m9f8\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") " pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.341402 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbcqn\" (UniqueName: \"kubernetes.io/projected/8f486e77-9eca-4bc0-9826-d9c2c47558dd-kube-api-access-kbcqn\") pod \"certified-operators-5m9f8\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") " pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.342000 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-catalog-content\") pod \"certified-operators-5m9f8\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") " pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.342164 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-utilities\") pod \"certified-operators-5m9f8\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") " pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.363794 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbcqn\" (UniqueName: \"kubernetes.io/projected/8f486e77-9eca-4bc0-9826-d9c2c47558dd-kube-api-access-kbcqn\") pod \"certified-operators-5m9f8\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") " pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.462224 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:32:54 crc kubenswrapper[4775]: I1125 20:32:54.986636 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5m9f8"]
Nov 25 20:32:55 crc kubenswrapper[4775]: I1125 20:32:55.847635 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59"
Nov 25 20:32:55 crc kubenswrapper[4775]: E1125 20:32:55.848218 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:32:55 crc kubenswrapper[4775]: I1125 20:32:55.888759 4775 generic.go:334] "Generic (PLEG): container finished" podID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerID="9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0" exitCode=0
Nov 25 20:32:55 crc kubenswrapper[4775]: I1125 20:32:55.888802 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m9f8" event={"ID":"8f486e77-9eca-4bc0-9826-d9c2c47558dd","Type":"ContainerDied","Data":"9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0"}
Nov 25 20:32:55 crc kubenswrapper[4775]: I1125 20:32:55.888829 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m9f8" event={"ID":"8f486e77-9eca-4bc0-9826-d9c2c47558dd","Type":"ContainerStarted","Data":"a1e9dd28f9306dea4d811b0263c3d9406aa1ad8528365c641e11e2d55d2d37a5"}
Nov 25 20:32:56 crc kubenswrapper[4775]: I1125 20:32:56.902946 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m9f8" event={"ID":"8f486e77-9eca-4bc0-9826-d9c2c47558dd","Type":"ContainerStarted","Data":"70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7"}
Nov 25 20:32:57 crc kubenswrapper[4775]: I1125 20:32:57.919361 4775 generic.go:334] "Generic (PLEG): container finished" podID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerID="70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7" exitCode=0
Nov 25 20:32:57 crc kubenswrapper[4775]: I1125 20:32:57.919496 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m9f8" event={"ID":"8f486e77-9eca-4bc0-9826-d9c2c47558dd","Type":"ContainerDied","Data":"70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7"}
Nov 25 20:32:58 crc kubenswrapper[4775]: I1125 20:32:58.929961 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m9f8" event={"ID":"8f486e77-9eca-4bc0-9826-d9c2c47558dd","Type":"ContainerStarted","Data":"44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5"}
Nov 25 20:32:58 crc kubenswrapper[4775]: I1125 20:32:58.954497 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5m9f8" podStartSLOduration=2.516984131 podStartE2EDuration="4.954475092s" podCreationTimestamp="2025-11-25 20:32:54 +0000 UTC" firstStartedPulling="2025-11-25 20:32:55.891898792 +0000 UTC m=+3557.808261208" lastFinishedPulling="2025-11-25 20:32:58.329389803 +0000 UTC m=+3560.245752169" observedRunningTime="2025-11-25 20:32:58.946975798 +0000 UTC m=+3560.863338164" watchObservedRunningTime="2025-11-25 20:32:58.954475092 +0000 UTC m=+3560.870837458"
Nov 25 20:33:04 crc kubenswrapper[4775]: I1125 20:33:04.463516 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:33:04 crc kubenswrapper[4775]: I1125 20:33:04.464124 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:33:04 crc kubenswrapper[4775]: I1125 20:33:04.520781 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:33:04 crc kubenswrapper[4775]: I1125 20:33:04.847964 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3"
Nov 25 20:33:04 crc kubenswrapper[4775]: E1125 20:33:04.848313 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:33:05 crc kubenswrapper[4775]: I1125 20:33:05.036675 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:33:05 crc kubenswrapper[4775]: I1125 20:33:05.089924 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5m9f8"]
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.005039 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5m9f8" podUID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerName="registry-server" containerID="cri-o://44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5" gracePeriod=2
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.506390 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.642445 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbcqn\" (UniqueName: \"kubernetes.io/projected/8f486e77-9eca-4bc0-9826-d9c2c47558dd-kube-api-access-kbcqn\") pod \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") "
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.642762 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-catalog-content\") pod \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") "
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.642968 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-utilities\") pod \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\" (UID: \"8f486e77-9eca-4bc0-9826-d9c2c47558dd\") "
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.643713 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-utilities" (OuterVolumeSpecName: "utilities") pod "8f486e77-9eca-4bc0-9826-d9c2c47558dd" (UID: "8f486e77-9eca-4bc0-9826-d9c2c47558dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.650107 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f486e77-9eca-4bc0-9826-d9c2c47558dd-kube-api-access-kbcqn" (OuterVolumeSpecName: "kube-api-access-kbcqn") pod "8f486e77-9eca-4bc0-9826-d9c2c47558dd" (UID: "8f486e77-9eca-4bc0-9826-d9c2c47558dd"). InnerVolumeSpecName "kube-api-access-kbcqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.745527 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbcqn\" (UniqueName: \"kubernetes.io/projected/8f486e77-9eca-4bc0-9826-d9c2c47558dd-kube-api-access-kbcqn\") on node \"crc\" DevicePath \"\""
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.745922 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.778203 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f486e77-9eca-4bc0-9826-d9c2c47558dd" (UID: "8f486e77-9eca-4bc0-9826-d9c2c47558dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:33:07 crc kubenswrapper[4775]: I1125 20:33:07.848216 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f486e77-9eca-4bc0-9826-d9c2c47558dd-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.015610 4775 generic.go:334] "Generic (PLEG): container finished" podID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerID="44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5" exitCode=0
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.015676 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m9f8" event={"ID":"8f486e77-9eca-4bc0-9826-d9c2c47558dd","Type":"ContainerDied","Data":"44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5"}
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.015708 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m9f8" event={"ID":"8f486e77-9eca-4bc0-9826-d9c2c47558dd","Type":"ContainerDied","Data":"a1e9dd28f9306dea4d811b0263c3d9406aa1ad8528365c641e11e2d55d2d37a5"}
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.015711 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m9f8"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.015729 4775 scope.go:117] "RemoveContainer" containerID="44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.047080 4775 scope.go:117] "RemoveContainer" containerID="70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.059876 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5m9f8"]
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.071081 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5m9f8"]
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.082430 4775 scope.go:117] "RemoveContainer" containerID="9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.121752 4775 scope.go:117] "RemoveContainer" containerID="44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5"
Nov 25 20:33:08 crc kubenswrapper[4775]: E1125 20:33:08.122456 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5\": container with ID starting with 44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5 not found: ID does not exist" containerID="44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.122539 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5"} err="failed to get container status \"44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5\": rpc error: code = NotFound desc = could not find container \"44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5\": container with ID starting with 44bef6b04af4bfe880d6a553d9180ca142412476fe97dde96efb655d6c9040f5 not found: ID does not exist"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.122572 4775 scope.go:117] "RemoveContainer" containerID="70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7"
Nov 25 20:33:08 crc kubenswrapper[4775]: E1125 20:33:08.123010 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7\": container with ID starting with 70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7 not found: ID does not exist" containerID="70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.123148 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7"} err="failed to get container status \"70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7\": rpc error: code = NotFound desc = could not find container \"70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7\": container with ID starting with 70f237ec44956b3671899aa51dda3e4f199252ad860ee213d597e543e20639d7 not found: ID does not exist"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.123257 4775 scope.go:117] "RemoveContainer" containerID="9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0"
Nov 25 20:33:08 crc kubenswrapper[4775]: E1125 20:33:08.123717 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0\": container with ID starting with 9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0 not found: ID does not exist" containerID="9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.123739 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0"} err="failed to get container status \"9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0\": rpc error: code = NotFound desc = could not find container \"9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0\": container with ID starting with 9969f25f4a6fd482cc46e35f109810bc40d4757aaecd25027a64b08db5a59bb0 not found: ID does not exist"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.858796 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59"
Nov 25 20:33:08 crc kubenswrapper[4775]: E1125 20:33:08.859459 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:33:08 crc kubenswrapper[4775]: I1125 20:33:08.860794 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" path="/var/lib/kubelet/pods/8f486e77-9eca-4bc0-9826-d9c2c47558dd/volumes"
Nov 25 20:33:11 crc kubenswrapper[4775]: I1125 20:33:11.070326 4775 patch_prober.go:28] interesting
pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:33:11 crc kubenswrapper[4775]: I1125 20:33:11.070707 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:33:19 crc kubenswrapper[4775]: I1125 20:33:19.847236 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" Nov 25 20:33:19 crc kubenswrapper[4775]: I1125 20:33:19.848594 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:33:19 crc kubenswrapper[4775]: E1125 20:33:19.848607 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:33:19 crc kubenswrapper[4775]: E1125 20:33:19.848925 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:33:33 crc kubenswrapper[4775]: I1125 20:33:33.847397 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 
20:33:33 crc kubenswrapper[4775]: E1125 20:33:33.848734 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:33:34 crc kubenswrapper[4775]: I1125 20:33:34.849119 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" Nov 25 20:33:34 crc kubenswrapper[4775]: E1125 20:33:34.849985 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:33:41 crc kubenswrapper[4775]: I1125 20:33:41.070142 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:33:41 crc kubenswrapper[4775]: I1125 20:33:41.070630 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:33:44 crc kubenswrapper[4775]: I1125 20:33:44.848196 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:33:44 crc kubenswrapper[4775]: E1125 20:33:44.849059 4775 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:33:49 crc kubenswrapper[4775]: I1125 20:33:49.847161 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" Nov 25 20:33:49 crc kubenswrapper[4775]: E1125 20:33:49.848088 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:33:59 crc kubenswrapper[4775]: I1125 20:33:59.848136 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:33:59 crc kubenswrapper[4775]: E1125 20:33:59.849353 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:34:02 crc kubenswrapper[4775]: I1125 20:34:02.849516 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" Nov 25 20:34:02 crc kubenswrapper[4775]: E1125 20:34:02.850844 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" 
pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:34:11 crc kubenswrapper[4775]: I1125 20:34:11.071103 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:34:11 crc kubenswrapper[4775]: I1125 20:34:11.071748 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:34:11 crc kubenswrapper[4775]: I1125 20:34:11.071798 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 20:34:11 crc kubenswrapper[4775]: I1125 20:34:11.072668 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:34:11 crc kubenswrapper[4775]: I1125 20:34:11.072739 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" gracePeriod=600 Nov 25 20:34:11 crc kubenswrapper[4775]: E1125 20:34:11.223191 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:34:11 crc kubenswrapper[4775]: I1125 20:34:11.749675 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685"} Nov 25 20:34:11 crc kubenswrapper[4775]: I1125 20:34:11.749711 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" exitCode=0 Nov 25 20:34:11 crc kubenswrapper[4775]: I1125 20:34:11.749732 4775 scope.go:117] "RemoveContainer" containerID="ec11d1f0a3f0d4a7c90c4c38fcc425cb1bce57f664d19903b36c7ccbbe002886" Nov 25 20:34:11 crc kubenswrapper[4775]: I1125 20:34:11.750605 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:34:11 crc kubenswrapper[4775]: E1125 20:34:11.751089 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:34:13 crc kubenswrapper[4775]: I1125 20:34:13.846711 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:34:13 crc 
kubenswrapper[4775]: E1125 20:34:13.847211 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:34:15 crc kubenswrapper[4775]: I1125 20:34:15.848162 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" Nov 25 20:34:15 crc kubenswrapper[4775]: E1125 20:34:15.849413 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:34:22 crc kubenswrapper[4775]: I1125 20:34:22.848242 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:34:22 crc kubenswrapper[4775]: E1125 20:34:22.849448 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:34:25 crc kubenswrapper[4775]: I1125 20:34:25.847531 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:34:25 crc kubenswrapper[4775]: E1125 20:34:25.848366 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:34:29 crc kubenswrapper[4775]: I1125 20:34:29.847667 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" Nov 25 20:34:29 crc kubenswrapper[4775]: E1125 20:34:29.848500 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:34:37 crc kubenswrapper[4775]: I1125 20:34:37.847807 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:34:37 crc kubenswrapper[4775]: E1125 20:34:37.849075 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:34:40 crc kubenswrapper[4775]: I1125 20:34:40.847296 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:34:40 crc kubenswrapper[4775]: E1125 20:34:40.848547 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" 
pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:34:43 crc kubenswrapper[4775]: I1125 20:34:43.848131 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" Nov 25 20:34:45 crc kubenswrapper[4775]: I1125 20:34:45.200710 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa"} Nov 25 20:34:45 crc kubenswrapper[4775]: I1125 20:34:45.201496 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:34:51 crc kubenswrapper[4775]: I1125 20:34:51.846761 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:34:51 crc kubenswrapper[4775]: E1125 20:34:51.847544 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:34:52 crc kubenswrapper[4775]: I1125 20:34:52.847683 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:34:52 crc kubenswrapper[4775]: E1125 20:34:52.848040 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 
20:35:03 crc kubenswrapper[4775]: I1125 20:35:03.257697 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:35:03 crc kubenswrapper[4775]: I1125 20:35:03.280315 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:35:06 crc kubenswrapper[4775]: I1125 20:35:06.847710 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:35:06 crc kubenswrapper[4775]: E1125 20:35:06.849737 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:35:07 crc kubenswrapper[4775]: I1125 20:35:07.848018 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:35:07 crc kubenswrapper[4775]: E1125 20:35:07.849105 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:35:13 crc kubenswrapper[4775]: I1125 20:35:13.146780 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" 
podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:35:13 crc kubenswrapper[4775]: I1125 20:35:13.167525 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:35:19 crc kubenswrapper[4775]: I1125 20:35:19.847821 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:35:19 crc kubenswrapper[4775]: E1125 20:35:19.851507 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:35:21 crc kubenswrapper[4775]: I1125 20:35:21.847016 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:35:21 crc kubenswrapper[4775]: E1125 20:35:21.847893 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:35:22 crc kubenswrapper[4775]: I1125 20:35:22.204416 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 
20:35:22 crc kubenswrapper[4775]: I1125 20:35:22.205692 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:35:22 crc kubenswrapper[4775]: I1125 20:35:22.205745 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 20:35:22 crc kubenswrapper[4775]: I1125 20:35:22.206700 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 20:35:22 crc kubenswrapper[4775]: I1125 20:35:22.206746 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" gracePeriod=30 Nov 25 20:35:22 crc kubenswrapper[4775]: I1125 20:35:22.213976 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF" Nov 25 20:35:25 crc kubenswrapper[4775]: E1125 20:35:25.462726 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:35:25 crc kubenswrapper[4775]: I1125 20:35:25.645422 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" exitCode=0 Nov 25 20:35:25 crc kubenswrapper[4775]: I1125 20:35:25.645466 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa"} Nov 25 20:35:25 crc kubenswrapper[4775]: I1125 20:35:25.645498 4775 scope.go:117] "RemoveContainer" containerID="4ac036471d83e67787b0fb2601900e8bbc33d2c09f3d293fd2ad127996b70ff3" Nov 25 20:35:25 crc kubenswrapper[4775]: I1125 20:35:25.646460 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:35:25 crc kubenswrapper[4775]: E1125 20:35:25.646906 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:35:33 crc kubenswrapper[4775]: I1125 20:35:33.847715 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:35:33 crc kubenswrapper[4775]: E1125 20:35:33.850153 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:35:35 crc kubenswrapper[4775]: I1125 20:35:35.847240 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:35:35 crc kubenswrapper[4775]: 
E1125 20:35:35.847890 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:35:37 crc kubenswrapper[4775]: I1125 20:35:37.847612 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:35:37 crc kubenswrapper[4775]: E1125 20:35:37.847976 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:35:44 crc kubenswrapper[4775]: I1125 20:35:44.847300 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:35:44 crc kubenswrapper[4775]: E1125 20:35:44.848475 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:35:49 crc kubenswrapper[4775]: I1125 20:35:49.884283 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:35:49 crc kubenswrapper[4775]: E1125 20:35:49.885425 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:35:52 crc kubenswrapper[4775]: I1125 20:35:52.062031 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-lxq6k"] Nov 25 20:35:52 crc kubenswrapper[4775]: I1125 20:35:52.085978 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-4d88-account-create-update-v4ctp"] Nov 25 20:35:52 crc kubenswrapper[4775]: I1125 20:35:52.092469 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-lxq6k"] Nov 25 20:35:52 crc kubenswrapper[4775]: I1125 20:35:52.102947 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-4d88-account-create-update-v4ctp"] Nov 25 20:35:52 crc kubenswrapper[4775]: I1125 20:35:52.848818 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:35:52 crc kubenswrapper[4775]: E1125 20:35:52.849240 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:35:52 crc kubenswrapper[4775]: I1125 20:35:52.868171 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72b57b64-f5b1-4f21-aafc-54a9ca0c1faa" path="/var/lib/kubelet/pods/72b57b64-f5b1-4f21-aafc-54a9ca0c1faa/volumes" Nov 25 20:35:52 crc kubenswrapper[4775]: I1125 20:35:52.869542 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb30cfa2-754e-47a1-8dad-08d8ebe919a2" 
path="/var/lib/kubelet/pods/cb30cfa2-754e-47a1-8dad-08d8ebe919a2/volumes" Nov 25 20:35:56 crc kubenswrapper[4775]: I1125 20:35:56.847774 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:35:56 crc kubenswrapper[4775]: E1125 20:35:56.848535 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:36:03 crc kubenswrapper[4775]: I1125 20:36:03.847371 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:36:03 crc kubenswrapper[4775]: E1125 20:36:03.848132 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:36:05 crc kubenswrapper[4775]: I1125 20:36:05.847383 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:36:05 crc kubenswrapper[4775]: E1125 20:36:05.848299 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:36:07 crc kubenswrapper[4775]: I1125 20:36:07.847060 4775 scope.go:117] 
"RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:36:07 crc kubenswrapper[4775]: E1125 20:36:07.847663 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:36:17 crc kubenswrapper[4775]: I1125 20:36:17.847799 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:36:17 crc kubenswrapper[4775]: E1125 20:36:17.848665 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:36:19 crc kubenswrapper[4775]: I1125 20:36:19.847529 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:36:19 crc kubenswrapper[4775]: E1125 20:36:19.848189 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:36:20 crc kubenswrapper[4775]: I1125 20:36:20.847712 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:36:20 crc kubenswrapper[4775]: 
E1125 20:36:20.848752 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:36:22 crc kubenswrapper[4775]: I1125 20:36:22.055802 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-mx4vt"] Nov 25 20:36:22 crc kubenswrapper[4775]: I1125 20:36:22.064879 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-mx4vt"] Nov 25 20:36:22 crc kubenswrapper[4775]: I1125 20:36:22.867321 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a582aec-b4ff-44a8-b217-9079392a5c8f" path="/var/lib/kubelet/pods/6a582aec-b4ff-44a8-b217-9079392a5c8f/volumes" Nov 25 20:36:30 crc kubenswrapper[4775]: I1125 20:36:30.847928 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:36:30 crc kubenswrapper[4775]: E1125 20:36:30.849145 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:36:32 crc kubenswrapper[4775]: I1125 20:36:32.846880 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:36:32 crc kubenswrapper[4775]: E1125 20:36:32.847433 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api 
pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:36:32 crc kubenswrapper[4775]: I1125 20:36:32.848533 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:36:32 crc kubenswrapper[4775]: E1125 20:36:32.849057 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:36:41 crc kubenswrapper[4775]: I1125 20:36:41.846952 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:36:41 crc kubenswrapper[4775]: E1125 20:36:41.848178 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:36:43 crc kubenswrapper[4775]: I1125 20:36:43.848034 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:36:43 crc kubenswrapper[4775]: E1125 20:36:43.849081 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:36:46 crc kubenswrapper[4775]: I1125 20:36:46.847888 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:36:46 crc kubenswrapper[4775]: E1125 20:36:46.849089 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:36:52 crc kubenswrapper[4775]: I1125 20:36:52.796822 4775 scope.go:117] "RemoveContainer" containerID="06f7fd2563e337203ec45d89054266bda6450d4f9c700022be857b2d6a2eb709" Nov 25 20:36:53 crc kubenswrapper[4775]: I1125 20:36:53.633593 4775 scope.go:117] "RemoveContainer" containerID="9abc99296482f8443e2ee099968bd77645ac719791ac75c65d42030624fc5b97" Nov 25 20:36:53 crc kubenswrapper[4775]: I1125 20:36:53.689059 4775 scope.go:117] "RemoveContainer" containerID="5fa907e2c71a32aa614ed0f2ea0c26cf779009ca44d7a7b7c46479b41b8f7787" Nov 25 20:36:54 crc kubenswrapper[4775]: I1125 20:36:54.847402 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:36:54 crc kubenswrapper[4775]: I1125 20:36:54.847937 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:36:54 crc kubenswrapper[4775]: E1125 20:36:54.848244 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:36:54 crc kubenswrapper[4775]: E1125 20:36:54.848500 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:37:00 crc kubenswrapper[4775]: I1125 20:37:00.848432 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:37:00 crc kubenswrapper[4775]: E1125 20:37:00.849574 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:37:07 crc kubenswrapper[4775]: I1125 20:37:07.849702 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:37:07 crc kubenswrapper[4775]: E1125 20:37:07.851076 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:37:09 crc kubenswrapper[4775]: I1125 20:37:09.847136 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:37:09 crc kubenswrapper[4775]: E1125 20:37:09.847800 4775 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:37:12 crc kubenswrapper[4775]: I1125 20:37:12.848763 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:37:12 crc kubenswrapper[4775]: E1125 20:37:12.849572 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:37:22 crc kubenswrapper[4775]: I1125 20:37:22.851828 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:37:22 crc kubenswrapper[4775]: E1125 20:37:22.855778 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:37:22 crc kubenswrapper[4775]: I1125 20:37:22.859061 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:37:22 crc kubenswrapper[4775]: E1125 20:37:22.866320 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:37:23 crc kubenswrapper[4775]: I1125 20:37:23.848044 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:37:23 crc kubenswrapper[4775]: E1125 20:37:23.848513 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:37:34 crc kubenswrapper[4775]: I1125 20:37:34.847581 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:37:36 crc kubenswrapper[4775]: I1125 20:37:36.291318 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e"} Nov 25 20:37:36 crc kubenswrapper[4775]: I1125 20:37:36.848281 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:37:36 crc kubenswrapper[4775]: E1125 20:37:36.848707 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:37:37 crc kubenswrapper[4775]: I1125 20:37:37.847432 4775 scope.go:117] "RemoveContainer" 
containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:37:37 crc kubenswrapper[4775]: E1125 20:37:37.848084 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:37:38 crc kubenswrapper[4775]: I1125 20:37:38.314994 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" exitCode=1 Nov 25 20:37:38 crc kubenswrapper[4775]: I1125 20:37:38.315053 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e"} Nov 25 20:37:38 crc kubenswrapper[4775]: I1125 20:37:38.315104 4775 scope.go:117] "RemoveContainer" containerID="a10ca0225aabcb5808f4833371396aad4952aff0431dbb0ca1cab9e777123c59" Nov 25 20:37:38 crc kubenswrapper[4775]: I1125 20:37:38.316124 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:37:38 crc kubenswrapper[4775]: E1125 20:37:38.316678 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:37:43 crc kubenswrapper[4775]: I1125 20:37:43.104592 4775 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:37:43 crc kubenswrapper[4775]: I1125 20:37:43.107002 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:37:43 crc kubenswrapper[4775]: I1125 20:37:43.107050 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:37:43 crc kubenswrapper[4775]: I1125 20:37:43.107920 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:37:43 crc kubenswrapper[4775]: E1125 20:37:43.108476 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:37:51 crc kubenswrapper[4775]: I1125 20:37:51.848200 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:37:51 crc kubenswrapper[4775]: E1125 20:37:51.849118 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:37:52 crc kubenswrapper[4775]: I1125 20:37:52.847310 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:37:52 crc kubenswrapper[4775]: E1125 20:37:52.847802 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:37:54 crc kubenswrapper[4775]: I1125 20:37:54.848400 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:37:54 crc kubenswrapper[4775]: E1125 20:37:54.849317 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:38:05 crc kubenswrapper[4775]: I1125 20:38:05.847640 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:38:05 crc kubenswrapper[4775]: E1125 20:38:05.848430 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:38:06 crc kubenswrapper[4775]: I1125 20:38:06.847811 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:38:06 crc kubenswrapper[4775]: E1125 20:38:06.848835 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api 
pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:38:07 crc kubenswrapper[4775]: I1125 20:38:07.847303 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:38:07 crc kubenswrapper[4775]: E1125 20:38:07.847929 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:38:16 crc kubenswrapper[4775]: I1125 20:38:16.847392 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:38:16 crc kubenswrapper[4775]: E1125 20:38:16.848158 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:38:18 crc kubenswrapper[4775]: I1125 20:38:18.856313 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:38:18 crc kubenswrapper[4775]: E1125 20:38:18.857411 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:38:19 
crc kubenswrapper[4775]: I1125 20:38:19.848026 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:38:19 crc kubenswrapper[4775]: E1125 20:38:19.848639 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:38:30 crc kubenswrapper[4775]: I1125 20:38:30.847554 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:38:30 crc kubenswrapper[4775]: I1125 20:38:30.848088 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:38:30 crc kubenswrapper[4775]: E1125 20:38:30.848234 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:38:30 crc kubenswrapper[4775]: E1125 20:38:30.848502 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:38:32 crc kubenswrapper[4775]: I1125 20:38:32.847014 4775 scope.go:117] "RemoveContainer" 
containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:38:32 crc kubenswrapper[4775]: E1125 20:38:32.847387 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.774552 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cmp4x"] Nov 25 20:38:39 crc kubenswrapper[4775]: E1125 20:38:39.775638 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerName="extract-content" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.775670 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerName="extract-content" Nov 25 20:38:39 crc kubenswrapper[4775]: E1125 20:38:39.775685 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerName="registry-server" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.775692 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerName="registry-server" Nov 25 20:38:39 crc kubenswrapper[4775]: E1125 20:38:39.775710 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerName="extract-utilities" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.775718 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerName="extract-utilities" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.775961 4775 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8f486e77-9eca-4bc0-9826-d9c2c47558dd" containerName="registry-server" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.777599 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.795040 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmp4x"] Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.862013 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-catalog-content\") pod \"redhat-marketplace-cmp4x\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.862105 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgjh7\" (UniqueName: \"kubernetes.io/projected/7cb5772e-c730-4293-b240-892565eb0c35-kube-api-access-xgjh7\") pod \"redhat-marketplace-cmp4x\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.862174 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-utilities\") pod \"redhat-marketplace-cmp4x\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.963380 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-utilities\") pod \"redhat-marketplace-cmp4x\" (UID: 
\"7cb5772e-c730-4293-b240-892565eb0c35\") " pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.963495 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-catalog-content\") pod \"redhat-marketplace-cmp4x\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.963546 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgjh7\" (UniqueName: \"kubernetes.io/projected/7cb5772e-c730-4293-b240-892565eb0c35-kube-api-access-xgjh7\") pod \"redhat-marketplace-cmp4x\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.963979 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-utilities\") pod \"redhat-marketplace-cmp4x\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.964043 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-catalog-content\") pod \"redhat-marketplace-cmp4x\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:39 crc kubenswrapper[4775]: I1125 20:38:39.997617 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgjh7\" (UniqueName: \"kubernetes.io/projected/7cb5772e-c730-4293-b240-892565eb0c35-kube-api-access-xgjh7\") pod \"redhat-marketplace-cmp4x\" (UID: 
\"7cb5772e-c730-4293-b240-892565eb0c35\") " pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:40 crc kubenswrapper[4775]: I1125 20:38:40.105845 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:40 crc kubenswrapper[4775]: I1125 20:38:40.565862 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmp4x"] Nov 25 20:38:41 crc kubenswrapper[4775]: I1125 20:38:41.030626 4775 generic.go:334] "Generic (PLEG): container finished" podID="7cb5772e-c730-4293-b240-892565eb0c35" containerID="3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8" exitCode=0 Nov 25 20:38:41 crc kubenswrapper[4775]: I1125 20:38:41.030732 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmp4x" event={"ID":"7cb5772e-c730-4293-b240-892565eb0c35","Type":"ContainerDied","Data":"3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8"} Nov 25 20:38:41 crc kubenswrapper[4775]: I1125 20:38:41.030776 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmp4x" event={"ID":"7cb5772e-c730-4293-b240-892565eb0c35","Type":"ContainerStarted","Data":"51dbca9394afdfaed8f1ca4939211c7fa2fcb1f5eef86eedb5a863ad6ade04d0"} Nov 25 20:38:41 crc kubenswrapper[4775]: I1125 20:38:41.035028 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:38:42 crc kubenswrapper[4775]: I1125 20:38:42.849774 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:38:42 crc kubenswrapper[4775]: E1125 20:38:42.850542 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share 
pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:38:42 crc kubenswrapper[4775]: I1125 20:38:42.850708 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:38:42 crc kubenswrapper[4775]: E1125 20:38:42.851089 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:38:43 crc kubenswrapper[4775]: I1125 20:38:43.056342 4775 generic.go:334] "Generic (PLEG): container finished" podID="7cb5772e-c730-4293-b240-892565eb0c35" containerID="0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee" exitCode=0 Nov 25 20:38:43 crc kubenswrapper[4775]: I1125 20:38:43.056406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmp4x" event={"ID":"7cb5772e-c730-4293-b240-892565eb0c35","Type":"ContainerDied","Data":"0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee"} Nov 25 20:38:44 crc kubenswrapper[4775]: I1125 20:38:44.069440 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmp4x" event={"ID":"7cb5772e-c730-4293-b240-892565eb0c35","Type":"ContainerStarted","Data":"59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7"} Nov 25 20:38:44 crc kubenswrapper[4775]: I1125 20:38:44.091433 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cmp4x" podStartSLOduration=2.676847349 podStartE2EDuration="5.091413383s" 
podCreationTimestamp="2025-11-25 20:38:39 +0000 UTC" firstStartedPulling="2025-11-25 20:38:41.034694865 +0000 UTC m=+3902.951057241" lastFinishedPulling="2025-11-25 20:38:43.449260899 +0000 UTC m=+3905.365623275" observedRunningTime="2025-11-25 20:38:44.088172646 +0000 UTC m=+3906.004535012" watchObservedRunningTime="2025-11-25 20:38:44.091413383 +0000 UTC m=+3906.007775749" Nov 25 20:38:44 crc kubenswrapper[4775]: I1125 20:38:44.846865 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:38:44 crc kubenswrapper[4775]: E1125 20:38:44.847327 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:38:50 crc kubenswrapper[4775]: I1125 20:38:50.107625 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:50 crc kubenswrapper[4775]: I1125 20:38:50.108132 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:50 crc kubenswrapper[4775]: I1125 20:38:50.170858 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:50 crc kubenswrapper[4775]: I1125 20:38:50.242936 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:50 crc kubenswrapper[4775]: I1125 20:38:50.422109 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmp4x"] Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.153101 4775 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/redhat-marketplace-cmp4x" podUID="7cb5772e-c730-4293-b240-892565eb0c35" containerName="registry-server" containerID="cri-o://59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7" gracePeriod=2 Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.741474 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.837602 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgjh7\" (UniqueName: \"kubernetes.io/projected/7cb5772e-c730-4293-b240-892565eb0c35-kube-api-access-xgjh7\") pod \"7cb5772e-c730-4293-b240-892565eb0c35\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.837745 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-catalog-content\") pod \"7cb5772e-c730-4293-b240-892565eb0c35\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.837805 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-utilities\") pod \"7cb5772e-c730-4293-b240-892565eb0c35\" (UID: \"7cb5772e-c730-4293-b240-892565eb0c35\") " Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.838921 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-utilities" (OuterVolumeSpecName: "utilities") pod "7cb5772e-c730-4293-b240-892565eb0c35" (UID: "7cb5772e-c730-4293-b240-892565eb0c35"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.846030 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb5772e-c730-4293-b240-892565eb0c35-kube-api-access-xgjh7" (OuterVolumeSpecName: "kube-api-access-xgjh7") pod "7cb5772e-c730-4293-b240-892565eb0c35" (UID: "7cb5772e-c730-4293-b240-892565eb0c35"). InnerVolumeSpecName "kube-api-access-xgjh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.871950 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cb5772e-c730-4293-b240-892565eb0c35" (UID: "7cb5772e-c730-4293-b240-892565eb0c35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.940776 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.940810 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb5772e-c730-4293-b240-892565eb0c35-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:52 crc kubenswrapper[4775]: I1125 20:38:52.940820 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgjh7\" (UniqueName: \"kubernetes.io/projected/7cb5772e-c730-4293-b240-892565eb0c35-kube-api-access-xgjh7\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.161802 4775 generic.go:334] "Generic (PLEG): container finished" podID="7cb5772e-c730-4293-b240-892565eb0c35" 
containerID="59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7" exitCode=0 Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.161834 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmp4x" event={"ID":"7cb5772e-c730-4293-b240-892565eb0c35","Type":"ContainerDied","Data":"59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7"} Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.161896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmp4x" event={"ID":"7cb5772e-c730-4293-b240-892565eb0c35","Type":"ContainerDied","Data":"51dbca9394afdfaed8f1ca4939211c7fa2fcb1f5eef86eedb5a863ad6ade04d0"} Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.161889 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmp4x" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.161929 4775 scope.go:117] "RemoveContainer" containerID="59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.183118 4775 scope.go:117] "RemoveContainer" containerID="0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.201771 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmp4x"] Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.217367 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmp4x"] Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.225426 4775 scope.go:117] "RemoveContainer" containerID="3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.257238 4775 scope.go:117] "RemoveContainer" containerID="59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7" Nov 25 
20:38:53 crc kubenswrapper[4775]: E1125 20:38:53.257714 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7\": container with ID starting with 59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7 not found: ID does not exist" containerID="59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.257749 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7"} err="failed to get container status \"59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7\": rpc error: code = NotFound desc = could not find container \"59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7\": container with ID starting with 59931d9f2af34302f788e84e240da9ce336d95d3bd08e6d2fb724fe463c48fc7 not found: ID does not exist" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.257769 4775 scope.go:117] "RemoveContainer" containerID="0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee" Nov 25 20:38:53 crc kubenswrapper[4775]: E1125 20:38:53.258142 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee\": container with ID starting with 0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee not found: ID does not exist" containerID="0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.258162 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee"} err="failed to get container status 
\"0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee\": rpc error: code = NotFound desc = could not find container \"0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee\": container with ID starting with 0f250b1153b0389aff7c00298499757dc71a799e64b82d8b4c0439421c836fee not found: ID does not exist" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.258174 4775 scope.go:117] "RemoveContainer" containerID="3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8" Nov 25 20:38:53 crc kubenswrapper[4775]: E1125 20:38:53.258469 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8\": container with ID starting with 3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8 not found: ID does not exist" containerID="3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.258492 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8"} err="failed to get container status \"3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8\": rpc error: code = NotFound desc = could not find container \"3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8\": container with ID starting with 3ae02c4b75fd46fd69c58538adcf93c8ddf5e52bd8f5a037c89106e2260d23b8 not found: ID does not exist" Nov 25 20:38:53 crc kubenswrapper[4775]: I1125 20:38:53.855094 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:38:53 crc kubenswrapper[4775]: E1125 20:38:53.856444 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share 
pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:38:54 crc kubenswrapper[4775]: I1125 20:38:54.857966 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb5772e-c730-4293-b240-892565eb0c35" path="/var/lib/kubelet/pods/7cb5772e-c730-4293-b240-892565eb0c35/volumes" Nov 25 20:38:56 crc kubenswrapper[4775]: I1125 20:38:56.848185 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:38:56 crc kubenswrapper[4775]: I1125 20:38:56.848843 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:38:56 crc kubenswrapper[4775]: E1125 20:38:56.849248 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:38:56 crc kubenswrapper[4775]: E1125 20:38:56.849252 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:39:06 crc kubenswrapper[4775]: I1125 20:39:06.850481 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:39:06 crc kubenswrapper[4775]: E1125 20:39:06.852438 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:39:10 crc kubenswrapper[4775]: I1125 20:39:10.846802 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:39:10 crc kubenswrapper[4775]: E1125 20:39:10.847545 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:39:11 crc kubenswrapper[4775]: I1125 20:39:11.847054 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:39:11 crc kubenswrapper[4775]: E1125 20:39:11.847883 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:39:17 crc kubenswrapper[4775]: I1125 20:39:17.847416 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:39:17 crc kubenswrapper[4775]: E1125 20:39:17.848367 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share 
pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:39:23 crc kubenswrapper[4775]: I1125 20:39:23.847167 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:39:23 crc kubenswrapper[4775]: E1125 20:39:23.848099 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:39:25 crc kubenswrapper[4775]: I1125 20:39:25.847834 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:39:26 crc kubenswrapper[4775]: I1125 20:39:26.505378 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"bd572fa2a47f60affb8aab47a81fb2345abd456d668c494e908c2d429aef871f"} Nov 25 20:39:28 crc kubenswrapper[4775]: I1125 20:39:28.858055 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:39:28 crc kubenswrapper[4775]: E1125 20:39:28.858937 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:39:38 crc kubenswrapper[4775]: I1125 20:39:38.855884 4775 scope.go:117] "RemoveContainer" 
containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:39:38 crc kubenswrapper[4775]: E1125 20:39:38.856705 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:39:42 crc kubenswrapper[4775]: I1125 20:39:42.848201 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:39:42 crc kubenswrapper[4775]: E1125 20:39:42.849144 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.254853 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pdjpd"] Nov 25 20:39:49 crc kubenswrapper[4775]: E1125 20:39:49.255812 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb5772e-c730-4293-b240-892565eb0c35" containerName="extract-utilities" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.255826 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb5772e-c730-4293-b240-892565eb0c35" containerName="extract-utilities" Nov 25 20:39:49 crc kubenswrapper[4775]: E1125 20:39:49.255841 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb5772e-c730-4293-b240-892565eb0c35" containerName="extract-content" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.255847 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb5772e-c730-4293-b240-892565eb0c35" 
containerName="extract-content" Nov 25 20:39:49 crc kubenswrapper[4775]: E1125 20:39:49.255860 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb5772e-c730-4293-b240-892565eb0c35" containerName="registry-server" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.255866 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb5772e-c730-4293-b240-892565eb0c35" containerName="registry-server" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.256085 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb5772e-c730-4293-b240-892565eb0c35" containerName="registry-server" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.257665 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.281215 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pdjpd"] Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.408490 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-catalog-content\") pod \"redhat-operators-pdjpd\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.408568 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtg4\" (UniqueName: \"kubernetes.io/projected/e846418f-53e2-4bef-af7f-cd277d2b48ba-kube-api-access-5xtg4\") pod \"redhat-operators-pdjpd\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.408597 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-utilities\") pod \"redhat-operators-pdjpd\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.510610 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-catalog-content\") pod \"redhat-operators-pdjpd\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.510736 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xtg4\" (UniqueName: \"kubernetes.io/projected/e846418f-53e2-4bef-af7f-cd277d2b48ba-kube-api-access-5xtg4\") pod \"redhat-operators-pdjpd\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.510771 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-utilities\") pod \"redhat-operators-pdjpd\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.511182 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-catalog-content\") pod \"redhat-operators-pdjpd\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.511301 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-utilities\") pod \"redhat-operators-pdjpd\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.539774 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xtg4\" (UniqueName: \"kubernetes.io/projected/e846418f-53e2-4bef-af7f-cd277d2b48ba-kube-api-access-5xtg4\") pod \"redhat-operators-pdjpd\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.592881 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:49 crc kubenswrapper[4775]: I1125 20:39:49.847376 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:39:49 crc kubenswrapper[4775]: E1125 20:39:49.848110 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:39:50 crc kubenswrapper[4775]: I1125 20:39:50.110491 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pdjpd"] Nov 25 20:39:50 crc kubenswrapper[4775]: I1125 20:39:50.784995 4775 generic.go:334] "Generic (PLEG): container finished" podID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerID="8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca" exitCode=0 Nov 25 20:39:50 crc kubenswrapper[4775]: I1125 20:39:50.785363 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdjpd" 
event={"ID":"e846418f-53e2-4bef-af7f-cd277d2b48ba","Type":"ContainerDied","Data":"8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca"} Nov 25 20:39:50 crc kubenswrapper[4775]: I1125 20:39:50.785401 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdjpd" event={"ID":"e846418f-53e2-4bef-af7f-cd277d2b48ba","Type":"ContainerStarted","Data":"47ac1e29cffdaffeb246634b1c602c12fe65dc492905e43ff0c8d2fbbcd4f93e"} Nov 25 20:39:51 crc kubenswrapper[4775]: I1125 20:39:51.798792 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdjpd" event={"ID":"e846418f-53e2-4bef-af7f-cd277d2b48ba","Type":"ContainerStarted","Data":"84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c"} Nov 25 20:39:52 crc kubenswrapper[4775]: I1125 20:39:52.815515 4775 generic.go:334] "Generic (PLEG): container finished" podID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerID="84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c" exitCode=0 Nov 25 20:39:52 crc kubenswrapper[4775]: I1125 20:39:52.815573 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdjpd" event={"ID":"e846418f-53e2-4bef-af7f-cd277d2b48ba","Type":"ContainerDied","Data":"84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c"} Nov 25 20:39:53 crc kubenswrapper[4775]: I1125 20:39:53.832365 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdjpd" event={"ID":"e846418f-53e2-4bef-af7f-cd277d2b48ba","Type":"ContainerStarted","Data":"1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1"} Nov 25 20:39:53 crc kubenswrapper[4775]: I1125 20:39:53.866562 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pdjpd" podStartSLOduration=2.120148264 podStartE2EDuration="4.866541173s" podCreationTimestamp="2025-11-25 20:39:49 +0000 UTC" 
firstStartedPulling="2025-11-25 20:39:50.79177732 +0000 UTC m=+3972.708139686" lastFinishedPulling="2025-11-25 20:39:53.538170209 +0000 UTC m=+3975.454532595" observedRunningTime="2025-11-25 20:39:53.854586401 +0000 UTC m=+3975.770948777" watchObservedRunningTime="2025-11-25 20:39:53.866541173 +0000 UTC m=+3975.782903559" Nov 25 20:39:56 crc kubenswrapper[4775]: I1125 20:39:56.848820 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:39:56 crc kubenswrapper[4775]: E1125 20:39:56.849702 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:39:59 crc kubenswrapper[4775]: I1125 20:39:59.593872 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:59 crc kubenswrapper[4775]: I1125 20:39:59.594223 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:59 crc kubenswrapper[4775]: I1125 20:39:59.658353 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:39:59 crc kubenswrapper[4775]: I1125 20:39:59.946776 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:40:00 crc kubenswrapper[4775]: I1125 20:40:00.008248 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pdjpd"] Nov 25 20:40:01 crc kubenswrapper[4775]: I1125 20:40:01.908962 4775 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-pdjpd" podUID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerName="registry-server" containerID="cri-o://1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1" gracePeriod=2 Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.450416 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.602621 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xtg4\" (UniqueName: \"kubernetes.io/projected/e846418f-53e2-4bef-af7f-cd277d2b48ba-kube-api-access-5xtg4\") pod \"e846418f-53e2-4bef-af7f-cd277d2b48ba\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.603058 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-catalog-content\") pod \"e846418f-53e2-4bef-af7f-cd277d2b48ba\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.603182 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-utilities\") pod \"e846418f-53e2-4bef-af7f-cd277d2b48ba\" (UID: \"e846418f-53e2-4bef-af7f-cd277d2b48ba\") " Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.603796 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-utilities" (OuterVolumeSpecName: "utilities") pod "e846418f-53e2-4bef-af7f-cd277d2b48ba" (UID: "e846418f-53e2-4bef-af7f-cd277d2b48ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.604058 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.610549 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e846418f-53e2-4bef-af7f-cd277d2b48ba-kube-api-access-5xtg4" (OuterVolumeSpecName: "kube-api-access-5xtg4") pod "e846418f-53e2-4bef-af7f-cd277d2b48ba" (UID: "e846418f-53e2-4bef-af7f-cd277d2b48ba"). InnerVolumeSpecName "kube-api-access-5xtg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.706820 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xtg4\" (UniqueName: \"kubernetes.io/projected/e846418f-53e2-4bef-af7f-cd277d2b48ba-kube-api-access-5xtg4\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.723601 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e846418f-53e2-4bef-af7f-cd277d2b48ba" (UID: "e846418f-53e2-4bef-af7f-cd277d2b48ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.808108 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e846418f-53e2-4bef-af7f-cd277d2b48ba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.930690 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pdjpd" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.930704 4775 generic.go:334] "Generic (PLEG): container finished" podID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerID="1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1" exitCode=0 Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.930752 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdjpd" event={"ID":"e846418f-53e2-4bef-af7f-cd277d2b48ba","Type":"ContainerDied","Data":"1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1"} Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.930865 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdjpd" event={"ID":"e846418f-53e2-4bef-af7f-cd277d2b48ba","Type":"ContainerDied","Data":"47ac1e29cffdaffeb246634b1c602c12fe65dc492905e43ff0c8d2fbbcd4f93e"} Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.930907 4775 scope.go:117] "RemoveContainer" containerID="1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.974369 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pdjpd"] Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.984416 4775 scope.go:117] "RemoveContainer" containerID="84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c" Nov 25 20:40:02 crc kubenswrapper[4775]: I1125 20:40:02.988224 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pdjpd"] Nov 25 20:40:03 crc kubenswrapper[4775]: I1125 20:40:03.025346 4775 scope.go:117] "RemoveContainer" containerID="8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca" Nov 25 20:40:03 crc kubenswrapper[4775]: I1125 20:40:03.093066 4775 scope.go:117] "RemoveContainer" 
containerID="1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1" Nov 25 20:40:03 crc kubenswrapper[4775]: E1125 20:40:03.093591 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1\": container with ID starting with 1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1 not found: ID does not exist" containerID="1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1" Nov 25 20:40:03 crc kubenswrapper[4775]: I1125 20:40:03.093722 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1"} err="failed to get container status \"1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1\": rpc error: code = NotFound desc = could not find container \"1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1\": container with ID starting with 1d3db334939116bf4de3778e223a92c7315889ab4e5732bca365e5c4815c6be1 not found: ID does not exist" Nov 25 20:40:03 crc kubenswrapper[4775]: I1125 20:40:03.093757 4775 scope.go:117] "RemoveContainer" containerID="84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c" Nov 25 20:40:03 crc kubenswrapper[4775]: E1125 20:40:03.094240 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c\": container with ID starting with 84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c not found: ID does not exist" containerID="84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c" Nov 25 20:40:03 crc kubenswrapper[4775]: I1125 20:40:03.094309 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c"} err="failed to get container status \"84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c\": rpc error: code = NotFound desc = could not find container \"84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c\": container with ID starting with 84ed18f4c276669e2c8d9b4811b4e691fb1c867d3a62f9aa5cd781e64785b32c not found: ID does not exist" Nov 25 20:40:03 crc kubenswrapper[4775]: I1125 20:40:03.094343 4775 scope.go:117] "RemoveContainer" containerID="8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca" Nov 25 20:40:03 crc kubenswrapper[4775]: E1125 20:40:03.094693 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca\": container with ID starting with 8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca not found: ID does not exist" containerID="8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca" Nov 25 20:40:03 crc kubenswrapper[4775]: I1125 20:40:03.094734 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca"} err="failed to get container status \"8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca\": rpc error: code = NotFound desc = could not find container \"8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca\": container with ID starting with 8fc74cd04cf32f55b90b4d47f29952f37044a14ef022b17537519850cd658fca not found: ID does not exist" Nov 25 20:40:04 crc kubenswrapper[4775]: I1125 20:40:04.847173 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:40:04 crc kubenswrapper[4775]: E1125 20:40:04.847778 4775 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:40:04 crc kubenswrapper[4775]: I1125 20:40:04.860449 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e846418f-53e2-4bef-af7f-cd277d2b48ba" path="/var/lib/kubelet/pods/e846418f-53e2-4bef-af7f-cd277d2b48ba/volumes" Nov 25 20:40:08 crc kubenswrapper[4775]: I1125 20:40:08.859615 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:40:08 crc kubenswrapper[4775]: E1125 20:40:08.860509 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:40:15 crc kubenswrapper[4775]: I1125 20:40:15.848560 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:40:15 crc kubenswrapper[4775]: E1125 20:40:15.850639 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:40:22 crc kubenswrapper[4775]: I1125 20:40:22.847907 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:40:22 crc kubenswrapper[4775]: E1125 20:40:22.848809 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.540011 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-95bxk"] Nov 25 20:40:25 crc kubenswrapper[4775]: E1125 20:40:25.540925 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerName="extract-content" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.540942 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerName="extract-content" Nov 25 20:40:25 crc kubenswrapper[4775]: E1125 20:40:25.540982 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerName="extract-utilities" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.540989 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerName="extract-utilities" Nov 25 20:40:25 crc kubenswrapper[4775]: E1125 20:40:25.541008 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerName="registry-server" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.541016 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerName="registry-server" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.541274 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e846418f-53e2-4bef-af7f-cd277d2b48ba" containerName="registry-server" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.542731 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.558279 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95bxk"] Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.661343 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8w9n\" (UniqueName: \"kubernetes.io/projected/26338132-cdca-4677-a600-9f5fbddf07eb-kube-api-access-f8w9n\") pod \"community-operators-95bxk\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.661740 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-utilities\") pod \"community-operators-95bxk\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.661811 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-catalog-content\") pod \"community-operators-95bxk\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.763530 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8w9n\" (UniqueName: \"kubernetes.io/projected/26338132-cdca-4677-a600-9f5fbddf07eb-kube-api-access-f8w9n\") pod \"community-operators-95bxk\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.763983 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-utilities\") pod \"community-operators-95bxk\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.764007 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-catalog-content\") pod \"community-operators-95bxk\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.764440 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-catalog-content\") pod \"community-operators-95bxk\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.765150 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-utilities\") pod \"community-operators-95bxk\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.786729 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8w9n\" (UniqueName: \"kubernetes.io/projected/26338132-cdca-4677-a600-9f5fbddf07eb-kube-api-access-f8w9n\") pod \"community-operators-95bxk\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:25 crc kubenswrapper[4775]: I1125 20:40:25.867555 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:26 crc kubenswrapper[4775]: I1125 20:40:26.377365 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95bxk"] Nov 25 20:40:27 crc kubenswrapper[4775]: I1125 20:40:27.190573 4775 generic.go:334] "Generic (PLEG): container finished" podID="26338132-cdca-4677-a600-9f5fbddf07eb" containerID="5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2" exitCode=0 Nov 25 20:40:27 crc kubenswrapper[4775]: I1125 20:40:27.191035 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95bxk" event={"ID":"26338132-cdca-4677-a600-9f5fbddf07eb","Type":"ContainerDied","Data":"5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2"} Nov 25 20:40:27 crc kubenswrapper[4775]: I1125 20:40:27.191095 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95bxk" event={"ID":"26338132-cdca-4677-a600-9f5fbddf07eb","Type":"ContainerStarted","Data":"5f2ceab66f1deea9505e45fac2ccfb873a41b3288138a48b957e30bcff0c4c80"} Nov 25 20:40:29 crc kubenswrapper[4775]: I1125 20:40:29.214367 4775 generic.go:334] "Generic (PLEG): container finished" podID="26338132-cdca-4677-a600-9f5fbddf07eb" containerID="d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e" exitCode=0 Nov 25 20:40:29 crc kubenswrapper[4775]: I1125 20:40:29.214569 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95bxk" event={"ID":"26338132-cdca-4677-a600-9f5fbddf07eb","Type":"ContainerDied","Data":"d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e"} Nov 25 20:40:29 crc kubenswrapper[4775]: I1125 20:40:29.847343 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:40:30 crc kubenswrapper[4775]: I1125 20:40:30.233594 4775 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95bxk" event={"ID":"26338132-cdca-4677-a600-9f5fbddf07eb","Type":"ContainerStarted","Data":"764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671"} Nov 25 20:40:30 crc kubenswrapper[4775]: I1125 20:40:30.272437 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-95bxk" podStartSLOduration=2.843063192 podStartE2EDuration="5.272416462s" podCreationTimestamp="2025-11-25 20:40:25 +0000 UTC" firstStartedPulling="2025-11-25 20:40:27.196451717 +0000 UTC m=+4009.112814083" lastFinishedPulling="2025-11-25 20:40:29.625804967 +0000 UTC m=+4011.542167353" observedRunningTime="2025-11-25 20:40:30.261994243 +0000 UTC m=+4012.178356619" watchObservedRunningTime="2025-11-25 20:40:30.272416462 +0000 UTC m=+4012.188778838" Nov 25 20:40:31 crc kubenswrapper[4775]: I1125 20:40:31.250118 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"d64b2a3b1b8c1faedf107bea4b30b5094cc81f7b68c48377ecbe5d63ebf762ba"} Nov 25 20:40:31 crc kubenswrapper[4775]: I1125 20:40:31.250828 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:40:34 crc kubenswrapper[4775]: I1125 20:40:34.847050 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:40:34 crc kubenswrapper[4775]: E1125 20:40:34.847993 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:40:35 crc kubenswrapper[4775]: I1125 20:40:35.869023 4775 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:35 crc kubenswrapper[4775]: I1125 20:40:35.869109 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:35 crc kubenswrapper[4775]: I1125 20:40:35.950904 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:36 crc kubenswrapper[4775]: I1125 20:40:36.379901 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:36 crc kubenswrapper[4775]: I1125 20:40:36.442314 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95bxk"] Nov 25 20:40:38 crc kubenswrapper[4775]: I1125 20:40:38.323141 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-95bxk" podUID="26338132-cdca-4677-a600-9f5fbddf07eb" containerName="registry-server" containerID="cri-o://764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671" gracePeriod=2 Nov 25 20:40:38 crc kubenswrapper[4775]: I1125 20:40:38.857511 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:38 crc kubenswrapper[4775]: I1125 20:40:38.963350 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-utilities\") pod \"26338132-cdca-4677-a600-9f5fbddf07eb\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " Nov 25 20:40:38 crc kubenswrapper[4775]: I1125 20:40:38.963528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-catalog-content\") pod \"26338132-cdca-4677-a600-9f5fbddf07eb\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " Nov 25 20:40:38 crc kubenswrapper[4775]: I1125 20:40:38.963677 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8w9n\" (UniqueName: \"kubernetes.io/projected/26338132-cdca-4677-a600-9f5fbddf07eb-kube-api-access-f8w9n\") pod \"26338132-cdca-4677-a600-9f5fbddf07eb\" (UID: \"26338132-cdca-4677-a600-9f5fbddf07eb\") " Nov 25 20:40:38 crc kubenswrapper[4775]: I1125 20:40:38.964321 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-utilities" (OuterVolumeSpecName: "utilities") pod "26338132-cdca-4677-a600-9f5fbddf07eb" (UID: "26338132-cdca-4677-a600-9f5fbddf07eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:38 crc kubenswrapper[4775]: I1125 20:40:38.980023 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26338132-cdca-4677-a600-9f5fbddf07eb-kube-api-access-f8w9n" (OuterVolumeSpecName: "kube-api-access-f8w9n") pod "26338132-cdca-4677-a600-9f5fbddf07eb" (UID: "26338132-cdca-4677-a600-9f5fbddf07eb"). InnerVolumeSpecName "kube-api-access-f8w9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.066015 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8w9n\" (UniqueName: \"kubernetes.io/projected/26338132-cdca-4677-a600-9f5fbddf07eb-kube-api-access-f8w9n\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.066041 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.339128 4775 generic.go:334] "Generic (PLEG): container finished" podID="26338132-cdca-4677-a600-9f5fbddf07eb" containerID="764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671" exitCode=0 Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.339189 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95bxk" event={"ID":"26338132-cdca-4677-a600-9f5fbddf07eb","Type":"ContainerDied","Data":"764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671"} Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.339240 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95bxk" event={"ID":"26338132-cdca-4677-a600-9f5fbddf07eb","Type":"ContainerDied","Data":"5f2ceab66f1deea9505e45fac2ccfb873a41b3288138a48b957e30bcff0c4c80"} Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.339248 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95bxk" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.339269 4775 scope.go:117] "RemoveContainer" containerID="764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.382604 4775 scope.go:117] "RemoveContainer" containerID="d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.419014 4775 scope.go:117] "RemoveContainer" containerID="5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.479233 4775 scope.go:117] "RemoveContainer" containerID="764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671" Nov 25 20:40:39 crc kubenswrapper[4775]: E1125 20:40:39.480280 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671\": container with ID starting with 764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671 not found: ID does not exist" containerID="764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.480332 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671"} err="failed to get container status \"764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671\": rpc error: code = NotFound desc = could not find container \"764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671\": container with ID starting with 764d96fcd4d77e3c673952836a45485a7d11d7cc5d0f2be56124a0a332496671 not found: ID does not exist" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.480368 4775 scope.go:117] "RemoveContainer" 
containerID="d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e" Nov 25 20:40:39 crc kubenswrapper[4775]: E1125 20:40:39.480832 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e\": container with ID starting with d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e not found: ID does not exist" containerID="d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.480871 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e"} err="failed to get container status \"d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e\": rpc error: code = NotFound desc = could not find container \"d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e\": container with ID starting with d878b438a3cf088358831b2185c24472deba773c7a266ced389fb3cb3ca49c2e not found: ID does not exist" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.480901 4775 scope.go:117] "RemoveContainer" containerID="5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2" Nov 25 20:40:39 crc kubenswrapper[4775]: E1125 20:40:39.481245 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2\": container with ID starting with 5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2 not found: ID does not exist" containerID="5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.481384 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2"} err="failed to get container status \"5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2\": rpc error: code = NotFound desc = could not find container \"5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2\": container with ID starting with 5f3b0047c2b2e4e2595463433bbbcbb88d3d64653a3b1b26ab76e0d3930fd0e2 not found: ID does not exist" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.532940 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26338132-cdca-4677-a600-9f5fbddf07eb" (UID: "26338132-cdca-4677-a600-9f5fbddf07eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.576274 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26338132-cdca-4677-a600-9f5fbddf07eb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.704867 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95bxk"] Nov 25 20:40:39 crc kubenswrapper[4775]: I1125 20:40:39.723515 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-95bxk"] Nov 25 20:40:40 crc kubenswrapper[4775]: I1125 20:40:40.866897 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26338132-cdca-4677-a600-9f5fbddf07eb" path="/var/lib/kubelet/pods/26338132-cdca-4677-a600-9f5fbddf07eb/volumes" Nov 25 20:40:43 crc kubenswrapper[4775]: I1125 20:40:43.192397 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:40:43 crc kubenswrapper[4775]: I1125 20:40:43.236831 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:40:49 crc kubenswrapper[4775]: I1125 20:40:49.847442 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:40:49 crc kubenswrapper[4775]: E1125 20:40:49.848352 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:40:53 crc kubenswrapper[4775]: I1125 20:40:53.211689 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:40:53 crc kubenswrapper[4775]: I1125 20:40:53.331431 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:02 crc kubenswrapper[4775]: I1125 20:41:02.208968 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:02 crc kubenswrapper[4775]: I1125 20:41:02.209027 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" 
podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:02 crc kubenswrapper[4775]: I1125 20:41:02.209703 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 20:41:02 crc kubenswrapper[4775]: I1125 20:41:02.210843 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"d64b2a3b1b8c1faedf107bea4b30b5094cc81f7b68c48377ecbe5d63ebf762ba"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 20:41:02 crc kubenswrapper[4775]: I1125 20:41:02.210895 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://d64b2a3b1b8c1faedf107bea4b30b5094cc81f7b68c48377ecbe5d63ebf762ba" gracePeriod=30 Nov 25 20:41:02 crc kubenswrapper[4775]: I1125 20:41:02.219545 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:02 crc kubenswrapper[4775]: I1125 20:41:02.846851 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:41:02 crc kubenswrapper[4775]: E1125 20:41:02.847189 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:41:05 crc kubenswrapper[4775]: I1125 20:41:05.643803 4775 generic.go:334] 
"Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="d64b2a3b1b8c1faedf107bea4b30b5094cc81f7b68c48377ecbe5d63ebf762ba" exitCode=0 Nov 25 20:41:05 crc kubenswrapper[4775]: I1125 20:41:05.643843 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"d64b2a3b1b8c1faedf107bea4b30b5094cc81f7b68c48377ecbe5d63ebf762ba"} Nov 25 20:41:05 crc kubenswrapper[4775]: I1125 20:41:05.644459 4775 scope.go:117] "RemoveContainer" containerID="beeafc3af90821b9cfeed3638d91728fac9e75ff43b69b7fdbc3016c031037aa" Nov 25 20:41:06 crc kubenswrapper[4775]: I1125 20:41:06.664447 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"} Nov 25 20:41:06 crc kubenswrapper[4775]: I1125 20:41:06.664959 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:41:14 crc kubenswrapper[4775]: I1125 20:41:14.848173 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:41:14 crc kubenswrapper[4775]: E1125 20:41:14.849349 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:41:19 crc kubenswrapper[4775]: I1125 20:41:19.912309 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="97e9f968-e12b-413d-a36b-7a2f16d0b1ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 20:41:23 crc 
kubenswrapper[4775]: I1125 20:41:23.204564 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:23 crc kubenswrapper[4775]: I1125 20:41:23.221235 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:28 crc kubenswrapper[4775]: I1125 20:41:28.852102 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:41:28 crc kubenswrapper[4775]: E1125 20:41:28.852729 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:41:33 crc kubenswrapper[4775]: I1125 20:41:33.063809 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:33 crc kubenswrapper[4775]: I1125 20:41:33.158387 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:41 crc kubenswrapper[4775]: I1125 20:41:41.071243 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:41:41 crc kubenswrapper[4775]: I1125 20:41:41.072122 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:41:42 crc kubenswrapper[4775]: I1125 20:41:42.204394 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:42 crc kubenswrapper[4775]: I1125 20:41:42.204497 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 20:41:42 crc kubenswrapper[4775]: I1125 20:41:42.205770 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 20:41:42 crc kubenswrapper[4775]: I1125 20:41:42.205865 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" gracePeriod=30 Nov 25 20:41:42 crc kubenswrapper[4775]: I1125 20:41:42.207005 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:41:42 crc 
kubenswrapper[4775]: I1125 20:41:42.218100 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF" Nov 25 20:41:43 crc kubenswrapper[4775]: I1125 20:41:43.848214 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:41:43 crc kubenswrapper[4775]: E1125 20:41:43.848761 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:41:45 crc kubenswrapper[4775]: E1125 20:41:45.351760 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:41:46 crc kubenswrapper[4775]: I1125 20:41:46.083944 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" exitCode=0 Nov 25 20:41:46 crc kubenswrapper[4775]: I1125 20:41:46.084028 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"} Nov 25 20:41:46 crc kubenswrapper[4775]: I1125 20:41:46.084313 4775 scope.go:117] "RemoveContainer" 
containerID="d64b2a3b1b8c1faedf107bea4b30b5094cc81f7b68c48377ecbe5d63ebf762ba" Nov 25 20:41:46 crc kubenswrapper[4775]: I1125 20:41:46.084989 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:41:46 crc kubenswrapper[4775]: E1125 20:41:46.085282 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:41:58 crc kubenswrapper[4775]: I1125 20:41:58.855199 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:41:58 crc kubenswrapper[4775]: I1125 20:41:58.855909 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:41:58 crc kubenswrapper[4775]: E1125 20:41:58.856134 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:41:58 crc kubenswrapper[4775]: E1125 20:41:58.862530 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:42:09 crc kubenswrapper[4775]: I1125 20:42:09.848189 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 
25 20:42:09 crc kubenswrapper[4775]: E1125 20:42:09.848859 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:42:11 crc kubenswrapper[4775]: I1125 20:42:11.070521 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:42:11 crc kubenswrapper[4775]: I1125 20:42:11.071497 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:42:11 crc kubenswrapper[4775]: I1125 20:42:11.847317 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:42:11 crc kubenswrapper[4775]: E1125 20:42:11.847606 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:42:23 crc kubenswrapper[4775]: I1125 20:42:23.847952 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:42:23 crc kubenswrapper[4775]: E1125 20:42:23.850679 4775 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:42:26 crc kubenswrapper[4775]: I1125 20:42:26.848017 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:42:26 crc kubenswrapper[4775]: E1125 20:42:26.849085 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:42:38 crc kubenswrapper[4775]: I1125 20:42:38.863030 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:42:38 crc kubenswrapper[4775]: I1125 20:42:38.864226 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:42:38 crc kubenswrapper[4775]: E1125 20:42:38.864871 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:42:39 crc kubenswrapper[4775]: I1125 20:42:39.681395 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"} Nov 25 20:42:41 crc kubenswrapper[4775]: 
I1125 20:42:41.071170 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.071726 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.071811 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.073169 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd572fa2a47f60affb8aab47a81fb2345abd456d668c494e908c2d429aef871f"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.073276 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://bd572fa2a47f60affb8aab47a81fb2345abd456d668c494e908c2d429aef871f" gracePeriod=600 Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.710108 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" exitCode=1 Nov 
25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.710180 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"} Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.710472 4775 scope.go:117] "RemoveContainer" containerID="2bb888f4467884ff15b0a2948c850fd0c3b69a05f07245675de2c696b229504e" Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.713301 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:42:41 crc kubenswrapper[4775]: E1125 20:42:41.713925 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.716039 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="bd572fa2a47f60affb8aab47a81fb2345abd456d668c494e908c2d429aef871f" exitCode=0 Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.716101 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"bd572fa2a47f60affb8aab47a81fb2345abd456d668c494e908c2d429aef871f"} Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.716134 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74"} Nov 25 20:42:41 crc kubenswrapper[4775]: I1125 20:42:41.782972 4775 scope.go:117] "RemoveContainer" containerID="07b2098f9e790cfcb410f681a649a373297ca33e0aa652c232439cee64e94685" Nov 25 20:42:43 crc kubenswrapper[4775]: I1125 20:42:43.105160 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:42:43 crc kubenswrapper[4775]: I1125 20:42:43.105535 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:42:43 crc kubenswrapper[4775]: I1125 20:42:43.105549 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:42:43 crc kubenswrapper[4775]: I1125 20:42:43.106321 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:42:43 crc kubenswrapper[4775]: E1125 20:42:43.106668 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:42:49 crc kubenswrapper[4775]: I1125 20:42:49.848132 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:42:49 crc kubenswrapper[4775]: E1125 20:42:49.849189 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" 
podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:42:56 crc kubenswrapper[4775]: I1125 20:42:56.848314 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:42:56 crc kubenswrapper[4775]: E1125 20:42:56.849607 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:43:02 crc kubenswrapper[4775]: I1125 20:43:02.848347 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:43:02 crc kubenswrapper[4775]: E1125 20:43:02.849549 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:43:09 crc kubenswrapper[4775]: I1125 20:43:09.847513 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:43:09 crc kubenswrapper[4775]: E1125 20:43:09.848248 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:43:14 crc kubenswrapper[4775]: I1125 20:43:14.848364 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 
20:43:14 crc kubenswrapper[4775]: E1125 20:43:14.849412 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:43:23 crc kubenswrapper[4775]: I1125 20:43:23.847470 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:43:23 crc kubenswrapper[4775]: E1125 20:43:23.848299 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:43:29 crc kubenswrapper[4775]: I1125 20:43:29.853773 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:43:29 crc kubenswrapper[4775]: E1125 20:43:29.854748 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:43:34 crc kubenswrapper[4775]: I1125 20:43:34.850386 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:43:34 crc kubenswrapper[4775]: E1125 20:43:34.851149 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share 
pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:43:41 crc kubenswrapper[4775]: I1125 20:43:41.847360 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:43:41 crc kubenswrapper[4775]: E1125 20:43:41.848206 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:43:45 crc kubenswrapper[4775]: I1125 20:43:45.850145 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:43:45 crc kubenswrapper[4775]: E1125 20:43:45.851237 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:43:55 crc kubenswrapper[4775]: I1125 20:43:55.847588 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:43:55 crc kubenswrapper[4775]: E1125 20:43:55.848722 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:43:56 crc kubenswrapper[4775]: I1125 20:43:56.847255 4775 scope.go:117] "RemoveContainer" 
containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:43:56 crc kubenswrapper[4775]: E1125 20:43:56.848034 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:44:10 crc kubenswrapper[4775]: I1125 20:44:10.859029 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:44:10 crc kubenswrapper[4775]: E1125 20:44:10.859860 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:44:11 crc kubenswrapper[4775]: I1125 20:44:11.847018 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:44:11 crc kubenswrapper[4775]: E1125 20:44:11.847491 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:44:25 crc kubenswrapper[4775]: I1125 20:44:25.847597 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:44:25 crc kubenswrapper[4775]: E1125 20:44:25.849246 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:44:26 crc kubenswrapper[4775]: I1125 20:44:26.847234 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:44:26 crc kubenswrapper[4775]: E1125 20:44:26.848020 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:44:37 crc kubenswrapper[4775]: I1125 20:44:37.846818 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:44:37 crc kubenswrapper[4775]: E1125 20:44:37.847758 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:44:38 crc kubenswrapper[4775]: I1125 20:44:38.854588 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:44:38 crc kubenswrapper[4775]: E1125 20:44:38.855191 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 
25 20:44:41 crc kubenswrapper[4775]: I1125 20:44:41.070334 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 20:44:41 crc kubenswrapper[4775]: I1125 20:44:41.071833 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 20:44:50 crc kubenswrapper[4775]: I1125 20:44:50.848218 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"
Nov 25 20:44:50 crc kubenswrapper[4775]: E1125 20:44:50.849197 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:44:51 crc kubenswrapper[4775]: I1125 20:44:51.848121 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"
Nov 25 20:44:51 crc kubenswrapper[4775]: E1125 20:44:51.848812 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.210143 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"]
Nov 25 20:45:00 crc kubenswrapper[4775]: E1125 20:45:00.211033 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26338132-cdca-4677-a600-9f5fbddf07eb" containerName="registry-server"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.211661 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="26338132-cdca-4677-a600-9f5fbddf07eb" containerName="registry-server"
Nov 25 20:45:00 crc kubenswrapper[4775]: E1125 20:45:00.211683 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26338132-cdca-4677-a600-9f5fbddf07eb" containerName="extract-content"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.211691 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="26338132-cdca-4677-a600-9f5fbddf07eb" containerName="extract-content"
Nov 25 20:45:00 crc kubenswrapper[4775]: E1125 20:45:00.211725 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26338132-cdca-4677-a600-9f5fbddf07eb" containerName="extract-utilities"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.211734 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="26338132-cdca-4677-a600-9f5fbddf07eb" containerName="extract-utilities"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.211977 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="26338132-cdca-4677-a600-9f5fbddf07eb" containerName="registry-server"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.212918 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.215811 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.216495 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.236194 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"]
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.340780 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/e9848839-2f8a-4b1d-b15c-e4d03858aa53-kube-api-access-b4j9d\") pod \"collect-profiles-29401725-wwll4\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.340874 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9848839-2f8a-4b1d-b15c-e4d03858aa53-secret-volume\") pod \"collect-profiles-29401725-wwll4\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.341023 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9848839-2f8a-4b1d-b15c-e4d03858aa53-config-volume\") pod \"collect-profiles-29401725-wwll4\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.443153 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9848839-2f8a-4b1d-b15c-e4d03858aa53-config-volume\") pod \"collect-profiles-29401725-wwll4\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.443382 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/e9848839-2f8a-4b1d-b15c-e4d03858aa53-kube-api-access-b4j9d\") pod \"collect-profiles-29401725-wwll4\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.443939 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9848839-2f8a-4b1d-b15c-e4d03858aa53-secret-volume\") pod \"collect-profiles-29401725-wwll4\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.444122 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9848839-2f8a-4b1d-b15c-e4d03858aa53-config-volume\") pod \"collect-profiles-29401725-wwll4\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.499886 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9848839-2f8a-4b1d-b15c-e4d03858aa53-secret-volume\") pod \"collect-profiles-29401725-wwll4\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.500780 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/e9848839-2f8a-4b1d-b15c-e4d03858aa53-kube-api-access-b4j9d\") pod \"collect-profiles-29401725-wwll4\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.537042 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:00 crc kubenswrapper[4775]: I1125 20:45:00.998829 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"]
Nov 25 20:45:01 crc kubenswrapper[4775]: W1125 20:45:01.012556 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9848839_2f8a_4b1d_b15c_e4d03858aa53.slice/crio-1fecfcfd1fac4272d20149746f9825d5793b27a0410e74fc2b59003f25890da5 WatchSource:0}: Error finding container 1fecfcfd1fac4272d20149746f9825d5793b27a0410e74fc2b59003f25890da5: Status 404 returned error can't find the container with id 1fecfcfd1fac4272d20149746f9825d5793b27a0410e74fc2b59003f25890da5
Nov 25 20:45:01 crc kubenswrapper[4775]: I1125 20:45:01.277595 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4" event={"ID":"e9848839-2f8a-4b1d-b15c-e4d03858aa53","Type":"ContainerStarted","Data":"6dcbbec66f9226ea53f6c5d6f3612b89f3a5577e16210f52b5f1e537112684b8"}
Nov 25 20:45:01 crc kubenswrapper[4775]: I1125 20:45:01.277661 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4" event={"ID":"e9848839-2f8a-4b1d-b15c-e4d03858aa53","Type":"ContainerStarted","Data":"1fecfcfd1fac4272d20149746f9825d5793b27a0410e74fc2b59003f25890da5"}
Nov 25 20:45:01 crc kubenswrapper[4775]: I1125 20:45:01.294470 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4" podStartSLOduration=1.294452471 podStartE2EDuration="1.294452471s" podCreationTimestamp="2025-11-25 20:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:01.291135833 +0000 UTC m=+4283.207498199" watchObservedRunningTime="2025-11-25 20:45:01.294452471 +0000 UTC m=+4283.210814837"
Nov 25 20:45:02 crc kubenswrapper[4775]: I1125 20:45:02.292566 4775 generic.go:334] "Generic (PLEG): container finished" podID="e9848839-2f8a-4b1d-b15c-e4d03858aa53" containerID="6dcbbec66f9226ea53f6c5d6f3612b89f3a5577e16210f52b5f1e537112684b8" exitCode=0
Nov 25 20:45:02 crc kubenswrapper[4775]: I1125 20:45:02.292939 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4" event={"ID":"e9848839-2f8a-4b1d-b15c-e4d03858aa53","Type":"ContainerDied","Data":"6dcbbec66f9226ea53f6c5d6f3612b89f3a5577e16210f52b5f1e537112684b8"}
Nov 25 20:45:02 crc kubenswrapper[4775]: I1125 20:45:02.847538 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"
Nov 25 20:45:02 crc kubenswrapper[4775]: E1125 20:45:02.848420 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.689157 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.812523 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/e9848839-2f8a-4b1d-b15c-e4d03858aa53-kube-api-access-b4j9d\") pod \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") "
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.812632 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9848839-2f8a-4b1d-b15c-e4d03858aa53-secret-volume\") pod \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") "
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.812727 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9848839-2f8a-4b1d-b15c-e4d03858aa53-config-volume\") pod \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\" (UID: \"e9848839-2f8a-4b1d-b15c-e4d03858aa53\") "
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.813360 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9848839-2f8a-4b1d-b15c-e4d03858aa53-config-volume" (OuterVolumeSpecName: "config-volume") pod "e9848839-2f8a-4b1d-b15c-e4d03858aa53" (UID: "e9848839-2f8a-4b1d-b15c-e4d03858aa53"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.813681 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9848839-2f8a-4b1d-b15c-e4d03858aa53-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.818101 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9848839-2f8a-4b1d-b15c-e4d03858aa53-kube-api-access-b4j9d" (OuterVolumeSpecName: "kube-api-access-b4j9d") pod "e9848839-2f8a-4b1d-b15c-e4d03858aa53" (UID: "e9848839-2f8a-4b1d-b15c-e4d03858aa53"). InnerVolumeSpecName "kube-api-access-b4j9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.818675 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9848839-2f8a-4b1d-b15c-e4d03858aa53-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e9848839-2f8a-4b1d-b15c-e4d03858aa53" (UID: "e9848839-2f8a-4b1d-b15c-e4d03858aa53"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.847338 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"
Nov 25 20:45:03 crc kubenswrapper[4775]: E1125 20:45:03.847972 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.917419 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/e9848839-2f8a-4b1d-b15c-e4d03858aa53-kube-api-access-b4j9d\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:03 crc kubenswrapper[4775]: I1125 20:45:03.918036 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9848839-2f8a-4b1d-b15c-e4d03858aa53-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:04 crc kubenswrapper[4775]: I1125 20:45:04.312293 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4" event={"ID":"e9848839-2f8a-4b1d-b15c-e4d03858aa53","Type":"ContainerDied","Data":"1fecfcfd1fac4272d20149746f9825d5793b27a0410e74fc2b59003f25890da5"}
Nov 25 20:45:04 crc kubenswrapper[4775]: I1125 20:45:04.312586 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fecfcfd1fac4272d20149746f9825d5793b27a0410e74fc2b59003f25890da5"
Nov 25 20:45:04 crc kubenswrapper[4775]: I1125 20:45:04.312368 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-wwll4"
Nov 25 20:45:04 crc kubenswrapper[4775]: I1125 20:45:04.392772 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2"]
Nov 25 20:45:04 crc kubenswrapper[4775]: I1125 20:45:04.405036 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401680-btws2"]
Nov 25 20:45:04 crc kubenswrapper[4775]: I1125 20:45:04.865842 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ced7361-4485-43e8-b942-4417fb168b44" path="/var/lib/kubelet/pods/0ced7361-4485-43e8-b942-4417fb168b44/volumes"
Nov 25 20:45:11 crc kubenswrapper[4775]: I1125 20:45:11.069926 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 20:45:11 crc kubenswrapper[4775]: I1125 20:45:11.072048 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 20:45:15 crc kubenswrapper[4775]: I1125 20:45:15.849022 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"
Nov 25 20:45:15 crc kubenswrapper[4775]: E1125 20:45:15.850192 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:45:17 crc kubenswrapper[4775]: I1125 20:45:17.847804 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"
Nov 25 20:45:17 crc kubenswrapper[4775]: E1125 20:45:17.848403 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:45:26 crc kubenswrapper[4775]: I1125 20:45:26.848072 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"
Nov 25 20:45:26 crc kubenswrapper[4775]: E1125 20:45:26.849550 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:45:30 crc kubenswrapper[4775]: I1125 20:45:30.847551 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"
Nov 25 20:45:30 crc kubenswrapper[4775]: E1125 20:45:30.850171 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:45:39 crc kubenswrapper[4775]: I1125 20:45:39.846730 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"
Nov 25 20:45:39 crc kubenswrapper[4775]: E1125 20:45:39.847496 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:45:41 crc kubenswrapper[4775]: I1125 20:45:41.070126 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 20:45:41 crc kubenswrapper[4775]: I1125 20:45:41.070545 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 20:45:41 crc kubenswrapper[4775]: I1125 20:45:41.070609 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm"
Nov 25 20:45:41 crc kubenswrapper[4775]: I1125 20:45:41.071390 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 20:45:41 crc kubenswrapper[4775]: I1125 20:45:41.071480 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" gracePeriod=600
Nov 25 20:45:41 crc kubenswrapper[4775]: E1125 20:45:41.332316 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:45:41 crc kubenswrapper[4775]: I1125 20:45:41.692825 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" exitCode=0
Nov 25 20:45:41 crc kubenswrapper[4775]: I1125 20:45:41.692852 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74"}
Nov 25 20:45:41 crc kubenswrapper[4775]: I1125 20:45:41.692947 4775 scope.go:117] "RemoveContainer" containerID="bd572fa2a47f60affb8aab47a81fb2345abd456d668c494e908c2d429aef871f"
Nov 25 20:45:41 crc kubenswrapper[4775]: I1125 20:45:41.694084 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74"
Nov 25 20:45:41 crc kubenswrapper[4775]: E1125 20:45:41.694686 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:45:43 crc kubenswrapper[4775]: I1125 20:45:43.847908 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"
Nov 25 20:45:43 crc kubenswrapper[4775]: E1125 20:45:43.848596 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:45:53 crc kubenswrapper[4775]: I1125 20:45:53.847718 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"
Nov 25 20:45:53 crc kubenswrapper[4775]: E1125 20:45:53.848620 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:45:54 crc kubenswrapper[4775]: I1125 20:45:54.166372 4775 scope.go:117] "RemoveContainer" containerID="bdf304c98187bfb77c74220ebd64d0c4713fe3fd2acfaf78ced4411b6e900c50"
Nov 25 20:45:55 crc kubenswrapper[4775]: I1125 20:45:55.847220 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"
Nov 25 20:45:55 crc kubenswrapper[4775]: I1125 20:45:55.847608 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74"
Nov 25 20:45:55 crc kubenswrapper[4775]: E1125 20:45:55.848062 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:45:55 crc kubenswrapper[4775]: E1125 20:45:55.848147 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:46:06 crc kubenswrapper[4775]: I1125 20:46:06.848155 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"
Nov 25 20:46:06 crc kubenswrapper[4775]: E1125 20:46:06.849370 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:46:08 crc kubenswrapper[4775]: I1125 20:46:08.859795 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb"
Nov 25 20:46:08 crc kubenswrapper[4775]: E1125 20:46:08.860954 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.379171 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5vh4n"]
Nov 25 20:46:09 crc kubenswrapper[4775]: E1125 20:46:09.380374 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9848839-2f8a-4b1d-b15c-e4d03858aa53" containerName="collect-profiles"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.380479 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9848839-2f8a-4b1d-b15c-e4d03858aa53" containerName="collect-profiles"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.380831 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9848839-2f8a-4b1d-b15c-e4d03858aa53" containerName="collect-profiles"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.382668 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.400250 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vh4n"]
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.497217 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-utilities\") pod \"certified-operators-5vh4n\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.497460 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7gk\" (UniqueName: \"kubernetes.io/projected/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-kube-api-access-qv7gk\") pod \"certified-operators-5vh4n\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.497593 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-catalog-content\") pod \"certified-operators-5vh4n\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.599871 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv7gk\" (UniqueName: \"kubernetes.io/projected/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-kube-api-access-qv7gk\") pod \"certified-operators-5vh4n\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.600038 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-catalog-content\") pod \"certified-operators-5vh4n\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.600100 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-utilities\") pod \"certified-operators-5vh4n\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.600755 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-utilities\") pod \"certified-operators-5vh4n\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.600871 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-catalog-content\") pod \"certified-operators-5vh4n\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:09 crc kubenswrapper[4775]: I1125 20:46:09.717430 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv7gk\" (UniqueName: \"kubernetes.io/projected/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-kube-api-access-qv7gk\") pod \"certified-operators-5vh4n\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:10 crc kubenswrapper[4775]: I1125 20:46:10.006239 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:10 crc kubenswrapper[4775]: I1125 20:46:10.630992 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vh4n"]
Nov 25 20:46:10 crc kubenswrapper[4775]: I1125 20:46:10.847596 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74"
Nov 25 20:46:10 crc kubenswrapper[4775]: E1125 20:46:10.848357 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:46:11 crc kubenswrapper[4775]: I1125 20:46:11.021730 4775 generic.go:334] "Generic (PLEG): container finished" podID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerID="508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d" exitCode=0
Nov 25 20:46:11 crc kubenswrapper[4775]: I1125 20:46:11.021774 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vh4n" event={"ID":"964232df-7cf9-4d94-ad1d-ed1bbc9c6619","Type":"ContainerDied","Data":"508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d"}
Nov 25 20:46:11 crc kubenswrapper[4775]: I1125 20:46:11.021800 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vh4n" event={"ID":"964232df-7cf9-4d94-ad1d-ed1bbc9c6619","Type":"ContainerStarted","Data":"abdf14e420b2f0daf4de3910c77dd04c8ca2c25ee5ab2cbdbf837fc9af2e43ef"}
Nov 25 20:46:11 crc kubenswrapper[4775]: I1125 20:46:11.023857 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 20:46:13 crc kubenswrapper[4775]: I1125 20:46:13.043361 4775 generic.go:334] "Generic (PLEG): container finished" podID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerID="cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b" exitCode=0
Nov 25 20:46:13 crc kubenswrapper[4775]: I1125 20:46:13.043439 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vh4n" event={"ID":"964232df-7cf9-4d94-ad1d-ed1bbc9c6619","Type":"ContainerDied","Data":"cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b"}
Nov 25 20:46:14 crc kubenswrapper[4775]: I1125 20:46:14.077370 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vh4n" event={"ID":"964232df-7cf9-4d94-ad1d-ed1bbc9c6619","Type":"ContainerStarted","Data":"283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243"}
Nov 25 20:46:14 crc kubenswrapper[4775]: I1125 20:46:14.114403 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5vh4n" podStartSLOduration=2.674925461 podStartE2EDuration="5.11437635s" podCreationTimestamp="2025-11-25 20:46:09 +0000 UTC" firstStartedPulling="2025-11-25 20:46:11.023572783 +0000 UTC m=+4352.939935149" lastFinishedPulling="2025-11-25 20:46:13.463023672 +0000 UTC m=+4355.379386038" observedRunningTime="2025-11-25 20:46:14.098226569 +0000 UTC m=+4356.014588945" watchObservedRunningTime="2025-11-25 20:46:14.11437635 +0000 UTC m=+4356.030738726"
Nov 25 20:46:18 crc kubenswrapper[4775]: I1125 20:46:18.867311 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1"
Nov 25 20:46:18 crc kubenswrapper[4775]: E1125 20:46:18.868035 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6"
Nov 25 20:46:20 crc kubenswrapper[4775]: I1125 20:46:20.007788 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:20 crc kubenswrapper[4775]: I1125 20:46:20.008148 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:20 crc kubenswrapper[4775]: I1125 20:46:20.088338 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:20 crc kubenswrapper[4775]: I1125 20:46:20.228352 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5vh4n"
Nov 25 20:46:20 crc kubenswrapper[4775]: I1125 20:46:20.339133 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vh4n"]
Nov 25 20:46:21 crc kubenswrapper[4775]: I1125 20:46:21.847397 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74"
Nov 25 20:46:21 crc kubenswrapper[4775]: E1125 20:46:21.848369 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656"
Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.176566 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5vh4n" podUID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerName="registry-server" containerID="cri-o://283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243" gracePeriod=2
Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.741532 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5vh4n" Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.777230 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv7gk\" (UniqueName: \"kubernetes.io/projected/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-kube-api-access-qv7gk\") pod \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.777355 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-catalog-content\") pod \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.777430 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-utilities\") pod \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\" (UID: \"964232df-7cf9-4d94-ad1d-ed1bbc9c6619\") " Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.778944 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-utilities" (OuterVolumeSpecName: "utilities") pod "964232df-7cf9-4d94-ad1d-ed1bbc9c6619" (UID: "964232df-7cf9-4d94-ad1d-ed1bbc9c6619"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.798130 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-kube-api-access-qv7gk" (OuterVolumeSpecName: "kube-api-access-qv7gk") pod "964232df-7cf9-4d94-ad1d-ed1bbc9c6619" (UID: "964232df-7cf9-4d94-ad1d-ed1bbc9c6619"). InnerVolumeSpecName "kube-api-access-qv7gk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.851146 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "964232df-7cf9-4d94-ad1d-ed1bbc9c6619" (UID: "964232df-7cf9-4d94-ad1d-ed1bbc9c6619"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.880740 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.880806 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv7gk\" (UniqueName: \"kubernetes.io/projected/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-kube-api-access-qv7gk\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:22 crc kubenswrapper[4775]: I1125 20:46:22.880834 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/964232df-7cf9-4d94-ad1d-ed1bbc9c6619-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.193081 4775 generic.go:334] "Generic (PLEG): container finished" podID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerID="283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243" exitCode=0 Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.193381 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5vh4n" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.193345 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vh4n" event={"ID":"964232df-7cf9-4d94-ad1d-ed1bbc9c6619","Type":"ContainerDied","Data":"283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243"} Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.193735 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vh4n" event={"ID":"964232df-7cf9-4d94-ad1d-ed1bbc9c6619","Type":"ContainerDied","Data":"abdf14e420b2f0daf4de3910c77dd04c8ca2c25ee5ab2cbdbf837fc9af2e43ef"} Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.193779 4775 scope.go:117] "RemoveContainer" containerID="283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.236126 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vh4n"] Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.236751 4775 scope.go:117] "RemoveContainer" containerID="cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.248288 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5vh4n"] Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.273495 4775 scope.go:117] "RemoveContainer" containerID="508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.339287 4775 scope.go:117] "RemoveContainer" containerID="283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243" Nov 25 20:46:23 crc kubenswrapper[4775]: E1125 20:46:23.340088 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243\": container with ID starting with 283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243 not found: ID does not exist" containerID="283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.340160 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243"} err="failed to get container status \"283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243\": rpc error: code = NotFound desc = could not find container \"283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243\": container with ID starting with 283a201f1d402fa4a84502b925258040ceb55558d58d3cc38a56626b92074243 not found: ID does not exist" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.340203 4775 scope.go:117] "RemoveContainer" containerID="cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b" Nov 25 20:46:23 crc kubenswrapper[4775]: E1125 20:46:23.340960 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b\": container with ID starting with cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b not found: ID does not exist" containerID="cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.340997 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b"} err="failed to get container status \"cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b\": rpc error: code = NotFound desc = could not find container \"cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b\": container with ID 
starting with cc41d7f6615f47bd7969d7f98c645d67870190e2033d7d08ef4ffa1ea1677f0b not found: ID does not exist" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.341019 4775 scope.go:117] "RemoveContainer" containerID="508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d" Nov 25 20:46:23 crc kubenswrapper[4775]: E1125 20:46:23.341372 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d\": container with ID starting with 508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d not found: ID does not exist" containerID="508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.341442 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d"} err="failed to get container status \"508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d\": rpc error: code = NotFound desc = could not find container \"508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d\": container with ID starting with 508a66dc5ebc56f2449da2d6682baf2833da0dc2064c82e08e1a2b8260852c6d not found: ID does not exist" Nov 25 20:46:23 crc kubenswrapper[4775]: I1125 20:46:23.847224 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:46:23 crc kubenswrapper[4775]: E1125 20:46:23.847920 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:46:24 crc kubenswrapper[4775]: I1125 20:46:24.869132 4775 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" path="/var/lib/kubelet/pods/964232df-7cf9-4d94-ad1d-ed1bbc9c6619/volumes" Nov 25 20:46:29 crc kubenswrapper[4775]: I1125 20:46:29.847281 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:46:29 crc kubenswrapper[4775]: E1125 20:46:29.848193 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:46:34 crc kubenswrapper[4775]: I1125 20:46:34.847717 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:46:34 crc kubenswrapper[4775]: E1125 20:46:34.848698 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:46:35 crc kubenswrapper[4775]: I1125 20:46:35.847709 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:46:35 crc kubenswrapper[4775]: E1125 20:46:35.848472 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:46:44 crc kubenswrapper[4775]: I1125 20:46:44.848146 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:46:44 crc kubenswrapper[4775]: E1125 20:46:44.849189 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:46:47 crc kubenswrapper[4775]: I1125 20:46:47.847283 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:46:48 crc kubenswrapper[4775]: I1125 20:46:48.464766 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"d8f3c7943b70e882f8509048aa4e15ee16a408b04e7fa9d9b0806cfd2f3af816"} Nov 25 20:46:48 crc kubenswrapper[4775]: I1125 20:46:48.465304 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:46:50 crc kubenswrapper[4775]: I1125 20:46:50.847319 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:46:50 crc kubenswrapper[4775]: E1125 20:46:50.848153 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:46:56 crc kubenswrapper[4775]: I1125 
20:46:56.847805 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:46:56 crc kubenswrapper[4775]: E1125 20:46:56.849051 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:47:03 crc kubenswrapper[4775]: I1125 20:47:03.204242 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:03 crc kubenswrapper[4775]: I1125 20:47:03.223962 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:05 crc kubenswrapper[4775]: I1125 20:47:05.847557 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:47:05 crc kubenswrapper[4775]: E1125 20:47:05.848311 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:47:10 crc kubenswrapper[4775]: I1125 20:47:10.847493 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:47:10 
crc kubenswrapper[4775]: E1125 20:47:10.848813 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:47:13 crc kubenswrapper[4775]: I1125 20:47:13.129195 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:13 crc kubenswrapper[4775]: I1125 20:47:13.180669 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:16 crc kubenswrapper[4775]: I1125 20:47:16.847256 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:47:16 crc kubenswrapper[4775]: E1125 20:47:16.848140 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:47:22 crc kubenswrapper[4775]: I1125 20:47:22.203514 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:22 crc kubenswrapper[4775]: I1125 
20:47:22.203578 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:22 crc kubenswrapper[4775]: I1125 20:47:22.204301 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 20:47:22 crc kubenswrapper[4775]: I1125 20:47:22.205345 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"d8f3c7943b70e882f8509048aa4e15ee16a408b04e7fa9d9b0806cfd2f3af816"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 20:47:22 crc kubenswrapper[4775]: I1125 20:47:22.205397 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://d8f3c7943b70e882f8509048aa4e15ee16a408b04e7fa9d9b0806cfd2f3af816" gracePeriod=30 Nov 25 20:47:22 crc kubenswrapper[4775]: I1125 20:47:22.210558 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF" Nov 25 20:47:23 crc kubenswrapper[4775]: I1125 20:47:23.847675 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:47:23 crc kubenswrapper[4775]: E1125 20:47:23.848477 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" 
podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:47:25 crc kubenswrapper[4775]: I1125 20:47:25.880319 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="d8f3c7943b70e882f8509048aa4e15ee16a408b04e7fa9d9b0806cfd2f3af816" exitCode=0 Nov 25 20:47:25 crc kubenswrapper[4775]: I1125 20:47:25.880369 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"d8f3c7943b70e882f8509048aa4e15ee16a408b04e7fa9d9b0806cfd2f3af816"} Nov 25 20:47:25 crc kubenswrapper[4775]: I1125 20:47:25.884196 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e"} Nov 25 20:47:25 crc kubenswrapper[4775]: I1125 20:47:25.885060 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:47:25 crc kubenswrapper[4775]: I1125 20:47:25.884245 4775 scope.go:117] "RemoveContainer" containerID="cbd0c87f58738f742a8f7c93cb10b36f84f9f843849ba91d851f8a415c4c17fb" Nov 25 20:47:30 crc kubenswrapper[4775]: I1125 20:47:30.847411 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:47:30 crc kubenswrapper[4775]: E1125 20:47:30.849494 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:47:35 crc kubenswrapper[4775]: I1125 20:47:35.847869 4775 scope.go:117] 
"RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:47:35 crc kubenswrapper[4775]: E1125 20:47:35.848947 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:47:42 crc kubenswrapper[4775]: I1125 20:47:42.849223 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:47:42 crc kubenswrapper[4775]: E1125 20:47:42.849875 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:47:43 crc kubenswrapper[4775]: I1125 20:47:43.091550 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:43 crc kubenswrapper[4775]: I1125 20:47:43.143706 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:46 crc kubenswrapper[4775]: I1125 20:47:46.846946 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:47:48 crc kubenswrapper[4775]: I1125 
20:47:48.163619 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9"} Nov 25 20:47:50 crc kubenswrapper[4775]: I1125 20:47:50.189967 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" exitCode=1 Nov 25 20:47:50 crc kubenswrapper[4775]: I1125 20:47:50.190059 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9"} Nov 25 20:47:50 crc kubenswrapper[4775]: I1125 20:47:50.190454 4775 scope.go:117] "RemoveContainer" containerID="734ac494266f1b421ea4c5da45daf3611f2ab28e1fae90fc04cdef9f49646cb1" Nov 25 20:47:50 crc kubenswrapper[4775]: I1125 20:47:50.191333 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:47:50 crc kubenswrapper[4775]: E1125 20:47:50.191863 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:47:53 crc kubenswrapper[4775]: I1125 20:47:53.104859 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:47:53 crc kubenswrapper[4775]: I1125 20:47:53.105473 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:47:53 crc kubenswrapper[4775]: I1125 
20:47:53.106447 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:47:53 crc kubenswrapper[4775]: E1125 20:47:53.106939 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:47:53 crc kubenswrapper[4775]: I1125 20:47:53.132622 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:53 crc kubenswrapper[4775]: I1125 20:47:53.165543 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:47:57 crc kubenswrapper[4775]: I1125 20:47:57.847717 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:47:57 crc kubenswrapper[4775]: E1125 20:47:57.848628 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:48:02 crc kubenswrapper[4775]: I1125 20:48:02.207812 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" 
containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:48:02 crc kubenswrapper[4775]: I1125 20:48:02.213729 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:48:02 crc kubenswrapper[4775]: I1125 20:48:02.213840 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 20:48:02 crc kubenswrapper[4775]: I1125 20:48:02.214912 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 20:48:02 crc kubenswrapper[4775]: I1125 20:48:02.215024 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" gracePeriod=30 Nov 25 20:48:02 crc kubenswrapper[4775]: I1125 20:48:02.226999 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF" Nov 25 20:48:03 crc kubenswrapper[4775]: I1125 20:48:03.105073 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:48:03 crc kubenswrapper[4775]: I1125 20:48:03.107333 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:48:03 crc kubenswrapper[4775]: E1125 20:48:03.108107 4775 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:48:05 crc kubenswrapper[4775]: E1125 20:48:05.463466 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:48:06 crc kubenswrapper[4775]: I1125 20:48:06.361417 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" exitCode=0 Nov 25 20:48:06 crc kubenswrapper[4775]: I1125 20:48:06.361484 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e"} Nov 25 20:48:06 crc kubenswrapper[4775]: I1125 20:48:06.361849 4775 scope.go:117] "RemoveContainer" containerID="d8f3c7943b70e882f8509048aa4e15ee16a408b04e7fa9d9b0806cfd2f3af816" Nov 25 20:48:06 crc kubenswrapper[4775]: I1125 20:48:06.362994 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:48:06 crc kubenswrapper[4775]: E1125 20:48:06.363486 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" 
podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:48:12 crc kubenswrapper[4775]: I1125 20:48:12.847456 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:48:12 crc kubenswrapper[4775]: E1125 20:48:12.848501 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:48:16 crc kubenswrapper[4775]: I1125 20:48:16.848575 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:48:16 crc kubenswrapper[4775]: E1125 20:48:16.851067 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:48:18 crc kubenswrapper[4775]: I1125 20:48:18.862027 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:48:18 crc kubenswrapper[4775]: E1125 20:48:18.862393 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:48:25 crc kubenswrapper[4775]: I1125 20:48:25.847974 4775 scope.go:117] "RemoveContainer" 
containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:48:25 crc kubenswrapper[4775]: E1125 20:48:25.848919 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:48:27 crc kubenswrapper[4775]: I1125 20:48:27.848482 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:48:27 crc kubenswrapper[4775]: E1125 20:48:27.849155 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:48:33 crc kubenswrapper[4775]: I1125 20:48:33.848518 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:48:33 crc kubenswrapper[4775]: E1125 20:48:33.849348 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:48:38 crc kubenswrapper[4775]: I1125 20:48:38.851917 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:48:38 crc kubenswrapper[4775]: E1125 20:48:38.853114 4775 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:48:38 crc kubenswrapper[4775]: I1125 20:48:38.867409 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:48:38 crc kubenswrapper[4775]: E1125 20:48:38.868486 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:48:44 crc kubenswrapper[4775]: I1125 20:48:44.847062 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:48:44 crc kubenswrapper[4775]: E1125 20:48:44.848064 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:48:51 crc kubenswrapper[4775]: I1125 20:48:51.846577 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:48:51 crc kubenswrapper[4775]: E1125 20:48:51.847530 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:48:52 crc kubenswrapper[4775]: I1125 20:48:52.851858 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:48:52 crc kubenswrapper[4775]: E1125 20:48:52.852141 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:48:58 crc kubenswrapper[4775]: I1125 20:48:58.853452 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:48:58 crc kubenswrapper[4775]: E1125 20:48:58.854295 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:49:04 crc kubenswrapper[4775]: I1125 20:49:04.852763 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:49:04 crc kubenswrapper[4775]: E1125 20:49:04.853730 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" 
podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:49:04 crc kubenswrapper[4775]: I1125 20:49:04.856788 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:49:04 crc kubenswrapper[4775]: E1125 20:49:04.857092 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:49:10 crc kubenswrapper[4775]: I1125 20:49:10.847405 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:49:10 crc kubenswrapper[4775]: E1125 20:49:10.848569 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:49:16 crc kubenswrapper[4775]: I1125 20:49:16.847950 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:49:16 crc kubenswrapper[4775]: E1125 20:49:16.849097 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:49:18 crc 
kubenswrapper[4775]: I1125 20:49:18.860693 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:49:18 crc kubenswrapper[4775]: E1125 20:49:18.862375 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:49:25 crc kubenswrapper[4775]: I1125 20:49:25.848379 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:49:25 crc kubenswrapper[4775]: E1125 20:49:25.849400 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:49:30 crc kubenswrapper[4775]: I1125 20:49:30.847441 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:49:30 crc kubenswrapper[4775]: E1125 20:49:30.848411 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:49:32 crc kubenswrapper[4775]: I1125 20:49:32.848461 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" 
Nov 25 20:49:32 crc kubenswrapper[4775]: E1125 20:49:32.849598 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:49:39 crc kubenswrapper[4775]: I1125 20:49:39.847485 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:49:39 crc kubenswrapper[4775]: E1125 20:49:39.848509 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:49:45 crc kubenswrapper[4775]: I1125 20:49:45.847025 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:49:45 crc kubenswrapper[4775]: E1125 20:49:45.847717 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:49:47 crc kubenswrapper[4775]: I1125 20:49:47.846909 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:49:47 crc kubenswrapper[4775]: E1125 20:49:47.847417 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:49:52 crc kubenswrapper[4775]: I1125 20:49:52.847536 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:49:52 crc kubenswrapper[4775]: E1125 20:49:52.848689 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.620330 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-87kxf"] Nov 25 20:49:54 crc kubenswrapper[4775]: E1125 20:49:54.620721 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerName="extract-content" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.620733 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerName="extract-content" Nov 25 20:49:54 crc kubenswrapper[4775]: E1125 20:49:54.620753 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerName="extract-utilities" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.620760 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerName="extract-utilities" Nov 25 20:49:54 crc kubenswrapper[4775]: E1125 20:49:54.620776 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" 
containerName="registry-server" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.620782 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerName="registry-server" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.620991 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="964232df-7cf9-4d94-ad1d-ed1bbc9c6619" containerName="registry-server" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.622343 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.632551 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-utilities\") pod \"redhat-marketplace-87kxf\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.632680 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-catalog-content\") pod \"redhat-marketplace-87kxf\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.632946 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66btq\" (UniqueName: \"kubernetes.io/projected/1b11890d-23f8-421e-9647-0abace57f5b1-kube-api-access-66btq\") pod \"redhat-marketplace-87kxf\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.638849 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-87kxf"] Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.735293 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66btq\" (UniqueName: \"kubernetes.io/projected/1b11890d-23f8-421e-9647-0abace57f5b1-kube-api-access-66btq\") pod \"redhat-marketplace-87kxf\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.735735 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-utilities\") pod \"redhat-marketplace-87kxf\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.735765 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-catalog-content\") pod \"redhat-marketplace-87kxf\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.736282 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-utilities\") pod \"redhat-marketplace-87kxf\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.736341 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-catalog-content\") pod \"redhat-marketplace-87kxf\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " 
pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.762830 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66btq\" (UniqueName: \"kubernetes.io/projected/1b11890d-23f8-421e-9647-0abace57f5b1-kube-api-access-66btq\") pod \"redhat-marketplace-87kxf\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:54 crc kubenswrapper[4775]: I1125 20:49:54.961454 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:49:55 crc kubenswrapper[4775]: I1125 20:49:55.457232 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-87kxf"] Nov 25 20:49:55 crc kubenswrapper[4775]: W1125 20:49:55.465237 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b11890d_23f8_421e_9647_0abace57f5b1.slice/crio-627ff0108b48e746e5765be7f21700bc090380f3092e6c96a3d2434e653f75bd WatchSource:0}: Error finding container 627ff0108b48e746e5765be7f21700bc090380f3092e6c96a3d2434e653f75bd: Status 404 returned error can't find the container with id 627ff0108b48e746e5765be7f21700bc090380f3092e6c96a3d2434e653f75bd Nov 25 20:49:55 crc kubenswrapper[4775]: I1125 20:49:55.640349 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87kxf" event={"ID":"1b11890d-23f8-421e-9647-0abace57f5b1","Type":"ContainerStarted","Data":"627ff0108b48e746e5765be7f21700bc090380f3092e6c96a3d2434e653f75bd"} Nov 25 20:49:56 crc kubenswrapper[4775]: I1125 20:49:56.651793 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b11890d-23f8-421e-9647-0abace57f5b1" containerID="02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457" exitCode=0 Nov 25 20:49:56 crc kubenswrapper[4775]: I1125 20:49:56.651839 
4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87kxf" event={"ID":"1b11890d-23f8-421e-9647-0abace57f5b1","Type":"ContainerDied","Data":"02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457"} Nov 25 20:49:58 crc kubenswrapper[4775]: I1125 20:49:58.674357 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b11890d-23f8-421e-9647-0abace57f5b1" containerID="7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8" exitCode=0 Nov 25 20:49:58 crc kubenswrapper[4775]: I1125 20:49:58.674605 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87kxf" event={"ID":"1b11890d-23f8-421e-9647-0abace57f5b1","Type":"ContainerDied","Data":"7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8"} Nov 25 20:49:59 crc kubenswrapper[4775]: I1125 20:49:59.702872 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87kxf" event={"ID":"1b11890d-23f8-421e-9647-0abace57f5b1","Type":"ContainerStarted","Data":"9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d"} Nov 25 20:49:59 crc kubenswrapper[4775]: I1125 20:49:59.746105 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-87kxf" podStartSLOduration=3.295971052 podStartE2EDuration="5.746074567s" podCreationTimestamp="2025-11-25 20:49:54 +0000 UTC" firstStartedPulling="2025-11-25 20:49:56.654220321 +0000 UTC m=+4578.570582687" lastFinishedPulling="2025-11-25 20:49:59.104323826 +0000 UTC m=+4581.020686202" observedRunningTime="2025-11-25 20:49:59.720961346 +0000 UTC m=+4581.637323762" watchObservedRunningTime="2025-11-25 20:49:59.746074567 +0000 UTC m=+4581.662436973" Nov 25 20:49:59 crc kubenswrapper[4775]: I1125 20:49:59.847924 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:49:59 crc 
kubenswrapper[4775]: I1125 20:49:59.848376 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:49:59 crc kubenswrapper[4775]: E1125 20:49:59.848946 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:49:59 crc kubenswrapper[4775]: E1125 20:49:59.848944 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:50:04 crc kubenswrapper[4775]: I1125 20:50:04.848884 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:50:04 crc kubenswrapper[4775]: E1125 20:50:04.854116 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:50:04 crc kubenswrapper[4775]: I1125 20:50:04.961706 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:50:04 crc kubenswrapper[4775]: I1125 20:50:04.961772 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:50:05 crc kubenswrapper[4775]: I1125 20:50:05.035143 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:50:05 crc kubenswrapper[4775]: I1125 20:50:05.813880 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:50:05 crc kubenswrapper[4775]: I1125 20:50:05.871850 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-87kxf"] Nov 25 20:50:07 crc kubenswrapper[4775]: I1125 20:50:07.789578 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-87kxf" podUID="1b11890d-23f8-421e-9647-0abace57f5b1" containerName="registry-server" containerID="cri-o://9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d" gracePeriod=2 Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.257418 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.446611 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-catalog-content\") pod \"1b11890d-23f8-421e-9647-0abace57f5b1\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.446779 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66btq\" (UniqueName: \"kubernetes.io/projected/1b11890d-23f8-421e-9647-0abace57f5b1-kube-api-access-66btq\") pod \"1b11890d-23f8-421e-9647-0abace57f5b1\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.446847 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-utilities\") pod \"1b11890d-23f8-421e-9647-0abace57f5b1\" (UID: \"1b11890d-23f8-421e-9647-0abace57f5b1\") " Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.450128 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-utilities" (OuterVolumeSpecName: "utilities") pod "1b11890d-23f8-421e-9647-0abace57f5b1" (UID: "1b11890d-23f8-421e-9647-0abace57f5b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.454973 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b11890d-23f8-421e-9647-0abace57f5b1-kube-api-access-66btq" (OuterVolumeSpecName: "kube-api-access-66btq") pod "1b11890d-23f8-421e-9647-0abace57f5b1" (UID: "1b11890d-23f8-421e-9647-0abace57f5b1"). InnerVolumeSpecName "kube-api-access-66btq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.483254 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b11890d-23f8-421e-9647-0abace57f5b1" (UID: "1b11890d-23f8-421e-9647-0abace57f5b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.549902 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.549957 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66btq\" (UniqueName: \"kubernetes.io/projected/1b11890d-23f8-421e-9647-0abace57f5b1-kube-api-access-66btq\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.549982 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b11890d-23f8-421e-9647-0abace57f5b1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.806284 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b11890d-23f8-421e-9647-0abace57f5b1" containerID="9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d" exitCode=0 Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.806329 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87kxf" event={"ID":"1b11890d-23f8-421e-9647-0abace57f5b1","Type":"ContainerDied","Data":"9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d"} Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.806373 4775 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-87kxf" event={"ID":"1b11890d-23f8-421e-9647-0abace57f5b1","Type":"ContainerDied","Data":"627ff0108b48e746e5765be7f21700bc090380f3092e6c96a3d2434e653f75bd"} Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.806391 4775 scope.go:117] "RemoveContainer" containerID="9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.806414 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87kxf" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.854297 4775 scope.go:117] "RemoveContainer" containerID="7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.877212 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-87kxf"] Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.877279 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-87kxf"] Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.881129 4775 scope.go:117] "RemoveContainer" containerID="02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.956509 4775 scope.go:117] "RemoveContainer" containerID="9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d" Nov 25 20:50:08 crc kubenswrapper[4775]: E1125 20:50:08.957096 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d\": container with ID starting with 9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d not found: ID does not exist" containerID="9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.957174 4775 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d"} err="failed to get container status \"9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d\": rpc error: code = NotFound desc = could not find container \"9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d\": container with ID starting with 9b97626fe6b2830b68b0d1fdc64f6b7a2023eb066690ed06f8b06344fc9df24d not found: ID does not exist" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.957204 4775 scope.go:117] "RemoveContainer" containerID="7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8" Nov 25 20:50:08 crc kubenswrapper[4775]: E1125 20:50:08.957962 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8\": container with ID starting with 7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8 not found: ID does not exist" containerID="7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.958018 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8"} err="failed to get container status \"7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8\": rpc error: code = NotFound desc = could not find container \"7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8\": container with ID starting with 7a53982ec0bff19bfea90ec0feca3d9ba3f40e01a01aba93f7bb501265a99ac8 not found: ID does not exist" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.958049 4775 scope.go:117] "RemoveContainer" containerID="02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457" Nov 25 20:50:08 crc kubenswrapper[4775]: E1125 
20:50:08.958434 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457\": container with ID starting with 02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457 not found: ID does not exist" containerID="02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457" Nov 25 20:50:08 crc kubenswrapper[4775]: I1125 20:50:08.958464 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457"} err="failed to get container status \"02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457\": rpc error: code = NotFound desc = could not find container \"02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457\": container with ID starting with 02e34b03a5b60261a22489e99f144678273b21dcb2e8f86d88d3ceaf02b9b457 not found: ID does not exist" Nov 25 20:50:10 crc kubenswrapper[4775]: I1125 20:50:10.863484 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b11890d-23f8-421e-9647-0abace57f5b1" path="/var/lib/kubelet/pods/1b11890d-23f8-421e-9647-0abace57f5b1/volumes" Nov 25 20:50:11 crc kubenswrapper[4775]: I1125 20:50:11.849174 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:50:11 crc kubenswrapper[4775]: E1125 20:50:11.850483 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:50:13 crc kubenswrapper[4775]: I1125 20:50:13.847881 
4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:50:13 crc kubenswrapper[4775]: E1125 20:50:13.849124 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:50:17 crc kubenswrapper[4775]: I1125 20:50:17.848506 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:50:17 crc kubenswrapper[4775]: E1125 20:50:17.849711 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:50:22 crc kubenswrapper[4775]: I1125 20:50:22.849004 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:50:22 crc kubenswrapper[4775]: E1125 20:50:22.851407 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:50:24 crc kubenswrapper[4775]: I1125 20:50:24.847815 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:50:24 crc kubenswrapper[4775]: 
E1125 20:50:24.848609 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:50:28 crc kubenswrapper[4775]: I1125 20:50:28.859464 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:50:28 crc kubenswrapper[4775]: E1125 20:50:28.860263 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:50:35 crc kubenswrapper[4775]: I1125 20:50:35.847802 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:50:35 crc kubenswrapper[4775]: E1125 20:50:35.848588 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:50:38 crc kubenswrapper[4775]: I1125 20:50:38.861930 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:50:38 crc kubenswrapper[4775]: E1125 20:50:38.862482 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:50:41 crc kubenswrapper[4775]: I1125 20:50:41.847106 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:50:41 crc kubenswrapper[4775]: E1125 20:50:41.847856 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:50:49 crc kubenswrapper[4775]: I1125 20:50:49.846936 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:50:50 crc kubenswrapper[4775]: I1125 20:50:50.272598 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"96a31c04b17396a85beeeb573cd247c8b10f0b0302f5a0e3481eb43aa4c1f294"} Nov 25 20:50:50 crc kubenswrapper[4775]: I1125 20:50:50.847180 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:50:50 crc kubenswrapper[4775]: E1125 20:50:50.848101 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:50:56 crc kubenswrapper[4775]: I1125 20:50:56.848474 4775 scope.go:117] 
"RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:50:56 crc kubenswrapper[4775]: E1125 20:50:56.849748 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:51:01 crc kubenswrapper[4775]: I1125 20:51:01.847711 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:51:01 crc kubenswrapper[4775]: E1125 20:51:01.848795 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.692382 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dqc5f"] Nov 25 20:51:05 crc kubenswrapper[4775]: E1125 20:51:05.693393 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b11890d-23f8-421e-9647-0abace57f5b1" containerName="extract-content" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.693412 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b11890d-23f8-421e-9647-0abace57f5b1" containerName="extract-content" Nov 25 20:51:05 crc kubenswrapper[4775]: E1125 20:51:05.693449 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b11890d-23f8-421e-9647-0abace57f5b1" containerName="registry-server" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.693457 4775 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1b11890d-23f8-421e-9647-0abace57f5b1" containerName="registry-server" Nov 25 20:51:05 crc kubenswrapper[4775]: E1125 20:51:05.693480 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b11890d-23f8-421e-9647-0abace57f5b1" containerName="extract-utilities" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.693489 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b11890d-23f8-421e-9647-0abace57f5b1" containerName="extract-utilities" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.693789 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b11890d-23f8-421e-9647-0abace57f5b1" containerName="registry-server" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.695467 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.722631 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqc5f"] Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.840775 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-utilities\") pod \"community-operators-dqc5f\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.840829 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf9bj\" (UniqueName: \"kubernetes.io/projected/59737aed-9ee6-404a-90cb-647f77904e7a-kube-api-access-wf9bj\") pod \"community-operators-dqc5f\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.840918 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-catalog-content\") pod \"community-operators-dqc5f\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.942946 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-utilities\") pod \"community-operators-dqc5f\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.943356 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf9bj\" (UniqueName: \"kubernetes.io/projected/59737aed-9ee6-404a-90cb-647f77904e7a-kube-api-access-wf9bj\") pod \"community-operators-dqc5f\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.943522 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-utilities\") pod \"community-operators-dqc5f\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.943755 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-catalog-content\") pod \"community-operators-dqc5f\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.944470 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-catalog-content\") pod \"community-operators-dqc5f\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:05 crc kubenswrapper[4775]: I1125 20:51:05.964441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf9bj\" (UniqueName: \"kubernetes.io/projected/59737aed-9ee6-404a-90cb-647f77904e7a-kube-api-access-wf9bj\") pod \"community-operators-dqc5f\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:06 crc kubenswrapper[4775]: I1125 20:51:06.030412 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:06 crc kubenswrapper[4775]: I1125 20:51:06.553910 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqc5f"] Nov 25 20:51:06 crc kubenswrapper[4775]: W1125 20:51:06.560868 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59737aed_9ee6_404a_90cb_647f77904e7a.slice/crio-31561acca7dd149db750a7964fb2af2285ee5e3c6557e60dffa3f17a3f0fa6ad WatchSource:0}: Error finding container 31561acca7dd149db750a7964fb2af2285ee5e3c6557e60dffa3f17a3f0fa6ad: Status 404 returned error can't find the container with id 31561acca7dd149db750a7964fb2af2285ee5e3c6557e60dffa3f17a3f0fa6ad Nov 25 20:51:07 crc kubenswrapper[4775]: I1125 20:51:07.443641 4775 generic.go:334] "Generic (PLEG): container finished" podID="59737aed-9ee6-404a-90cb-647f77904e7a" containerID="988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504" exitCode=0 Nov 25 20:51:07 crc kubenswrapper[4775]: I1125 20:51:07.443748 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dqc5f" event={"ID":"59737aed-9ee6-404a-90cb-647f77904e7a","Type":"ContainerDied","Data":"988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504"} Nov 25 20:51:07 crc kubenswrapper[4775]: I1125 20:51:07.444054 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqc5f" event={"ID":"59737aed-9ee6-404a-90cb-647f77904e7a","Type":"ContainerStarted","Data":"31561acca7dd149db750a7964fb2af2285ee5e3c6557e60dffa3f17a3f0fa6ad"} Nov 25 20:51:08 crc kubenswrapper[4775]: I1125 20:51:08.453766 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqc5f" event={"ID":"59737aed-9ee6-404a-90cb-647f77904e7a","Type":"ContainerStarted","Data":"b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2"} Nov 25 20:51:08 crc kubenswrapper[4775]: E1125 20:51:08.707509 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59737aed_9ee6_404a_90cb_647f77904e7a.slice/crio-conmon-b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2.scope\": RecentStats: unable to find data in memory cache]" Nov 25 20:51:09 crc kubenswrapper[4775]: I1125 20:51:09.471634 4775 generic.go:334] "Generic (PLEG): container finished" podID="59737aed-9ee6-404a-90cb-647f77904e7a" containerID="b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2" exitCode=0 Nov 25 20:51:09 crc kubenswrapper[4775]: I1125 20:51:09.471725 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqc5f" event={"ID":"59737aed-9ee6-404a-90cb-647f77904e7a","Type":"ContainerDied","Data":"b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2"} Nov 25 20:51:09 crc kubenswrapper[4775]: I1125 20:51:09.847972 4775 scope.go:117] "RemoveContainer" 
containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:51:09 crc kubenswrapper[4775]: E1125 20:51:09.848157 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:51:10 crc kubenswrapper[4775]: I1125 20:51:10.490199 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqc5f" event={"ID":"59737aed-9ee6-404a-90cb-647f77904e7a","Type":"ContainerStarted","Data":"02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0"} Nov 25 20:51:10 crc kubenswrapper[4775]: I1125 20:51:10.529029 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dqc5f" podStartSLOduration=2.850082505 podStartE2EDuration="5.528995557s" podCreationTimestamp="2025-11-25 20:51:05 +0000 UTC" firstStartedPulling="2025-11-25 20:51:07.446629305 +0000 UTC m=+4649.362991701" lastFinishedPulling="2025-11-25 20:51:10.125542377 +0000 UTC m=+4652.041904753" observedRunningTime="2025-11-25 20:51:10.518704552 +0000 UTC m=+4652.435066958" watchObservedRunningTime="2025-11-25 20:51:10.528995557 +0000 UTC m=+4652.445357963" Nov 25 20:51:15 crc kubenswrapper[4775]: I1125 20:51:15.848163 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:51:15 crc kubenswrapper[4775]: E1125 20:51:15.849235 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" 
podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:51:16 crc kubenswrapper[4775]: I1125 20:51:16.030625 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:16 crc kubenswrapper[4775]: I1125 20:51:16.030967 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:16 crc kubenswrapper[4775]: I1125 20:51:16.080173 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:16 crc kubenswrapper[4775]: I1125 20:51:16.646111 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:16 crc kubenswrapper[4775]: I1125 20:51:16.727133 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqc5f"] Nov 25 20:51:18 crc kubenswrapper[4775]: I1125 20:51:18.582033 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dqc5f" podUID="59737aed-9ee6-404a-90cb-647f77904e7a" containerName="registry-server" containerID="cri-o://02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0" gracePeriod=2 Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.129262 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.289720 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf9bj\" (UniqueName: \"kubernetes.io/projected/59737aed-9ee6-404a-90cb-647f77904e7a-kube-api-access-wf9bj\") pod \"59737aed-9ee6-404a-90cb-647f77904e7a\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.289842 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-utilities\") pod \"59737aed-9ee6-404a-90cb-647f77904e7a\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.290042 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-catalog-content\") pod \"59737aed-9ee6-404a-90cb-647f77904e7a\" (UID: \"59737aed-9ee6-404a-90cb-647f77904e7a\") " Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.292280 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-utilities" (OuterVolumeSpecName: "utilities") pod "59737aed-9ee6-404a-90cb-647f77904e7a" (UID: "59737aed-9ee6-404a-90cb-647f77904e7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.300931 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59737aed-9ee6-404a-90cb-647f77904e7a-kube-api-access-wf9bj" (OuterVolumeSpecName: "kube-api-access-wf9bj") pod "59737aed-9ee6-404a-90cb-647f77904e7a" (UID: "59737aed-9ee6-404a-90cb-647f77904e7a"). InnerVolumeSpecName "kube-api-access-wf9bj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.392848 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf9bj\" (UniqueName: \"kubernetes.io/projected/59737aed-9ee6-404a-90cb-647f77904e7a-kube-api-access-wf9bj\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.392888 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.594997 4775 generic.go:334] "Generic (PLEG): container finished" podID="59737aed-9ee6-404a-90cb-647f77904e7a" containerID="02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0" exitCode=0 Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.595072 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqc5f" event={"ID":"59737aed-9ee6-404a-90cb-647f77904e7a","Type":"ContainerDied","Data":"02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0"} Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.595103 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqc5f" event={"ID":"59737aed-9ee6-404a-90cb-647f77904e7a","Type":"ContainerDied","Data":"31561acca7dd149db750a7964fb2af2285ee5e3c6557e60dffa3f17a3f0fa6ad"} Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.595124 4775 scope.go:117] "RemoveContainer" containerID="02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.595248 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dqc5f" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.627507 4775 scope.go:117] "RemoveContainer" containerID="b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.655090 4775 scope.go:117] "RemoveContainer" containerID="988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.678837 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59737aed-9ee6-404a-90cb-647f77904e7a" (UID: "59737aed-9ee6-404a-90cb-647f77904e7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.700625 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59737aed-9ee6-404a-90cb-647f77904e7a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.727104 4775 scope.go:117] "RemoveContainer" containerID="02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0" Nov 25 20:51:19 crc kubenswrapper[4775]: E1125 20:51:19.727882 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0\": container with ID starting with 02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0 not found: ID does not exist" containerID="02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.727924 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0"} err="failed to get container status \"02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0\": rpc error: code = NotFound desc = could not find container \"02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0\": container with ID starting with 02537bad880e10248120301c4e5a79c927659afeec041f1e7b8a9acb4af8b7b0 not found: ID does not exist" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.727949 4775 scope.go:117] "RemoveContainer" containerID="b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2" Nov 25 20:51:19 crc kubenswrapper[4775]: E1125 20:51:19.728493 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2\": container with ID starting with b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2 not found: ID does not exist" containerID="b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.728520 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2"} err="failed to get container status \"b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2\": rpc error: code = NotFound desc = could not find container \"b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2\": container with ID starting with b4a14fd83c69e553fc36e5eeeae17835592979ba1967f7fad1777dfc297b1dc2 not found: ID does not exist" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.728538 4775 scope.go:117] "RemoveContainer" containerID="988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504" Nov 25 20:51:19 crc kubenswrapper[4775]: E1125 20:51:19.728875 4775 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504\": container with ID starting with 988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504 not found: ID does not exist" containerID="988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.728900 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504"} err="failed to get container status \"988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504\": rpc error: code = NotFound desc = could not find container \"988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504\": container with ID starting with 988f3f72c27b465c0218b2c6a0a61d5d73f07fdf9565b271b507367dfb16e504 not found: ID does not exist" Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.944409 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqc5f"] Nov 25 20:51:19 crc kubenswrapper[4775]: I1125 20:51:19.952635 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dqc5f"] Nov 25 20:51:20 crc kubenswrapper[4775]: I1125 20:51:20.864444 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59737aed-9ee6-404a-90cb-647f77904e7a" path="/var/lib/kubelet/pods/59737aed-9ee6-404a-90cb-647f77904e7a/volumes" Nov 25 20:51:21 crc kubenswrapper[4775]: I1125 20:51:21.848576 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:51:21 crc kubenswrapper[4775]: E1125 20:51:21.849432 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api 
pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:51:27 crc kubenswrapper[4775]: I1125 20:51:27.847503 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:51:27 crc kubenswrapper[4775]: E1125 20:51:27.848867 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:51:34 crc kubenswrapper[4775]: I1125 20:51:34.847150 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:51:34 crc kubenswrapper[4775]: E1125 20:51:34.848193 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:51:40 crc kubenswrapper[4775]: I1125 20:51:40.848859 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:51:40 crc kubenswrapper[4775]: E1125 20:51:40.850344 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:51:47 crc kubenswrapper[4775]: I1125 20:51:47.847393 4775 scope.go:117] 
"RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:51:47 crc kubenswrapper[4775]: E1125 20:51:47.848155 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:51:51 crc kubenswrapper[4775]: I1125 20:51:51.847296 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:51:51 crc kubenswrapper[4775]: E1125 20:51:51.848725 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:52:02 crc kubenswrapper[4775]: I1125 20:52:02.847152 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:52:02 crc kubenswrapper[4775]: E1125 20:52:02.848134 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:52:04 crc kubenswrapper[4775]: I1125 20:52:04.847561 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:52:04 crc kubenswrapper[4775]: E1125 20:52:04.848537 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:52:17 crc kubenswrapper[4775]: I1125 20:52:17.847894 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:52:17 crc kubenswrapper[4775]: E1125 20:52:17.850529 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:52:19 crc kubenswrapper[4775]: I1125 20:52:19.847524 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:52:19 crc kubenswrapper[4775]: E1125 20:52:19.848167 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:52:30 crc kubenswrapper[4775]: I1125 20:52:30.848757 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:52:30 crc kubenswrapper[4775]: E1125 20:52:30.850001 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 
25 20:52:34 crc kubenswrapper[4775]: I1125 20:52:34.847684 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:52:34 crc kubenswrapper[4775]: E1125 20:52:34.848716 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.110430 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-frvh8"] Nov 25 20:52:37 crc kubenswrapper[4775]: E1125 20:52:37.111505 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59737aed-9ee6-404a-90cb-647f77904e7a" containerName="registry-server" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.111532 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="59737aed-9ee6-404a-90cb-647f77904e7a" containerName="registry-server" Nov 25 20:52:37 crc kubenswrapper[4775]: E1125 20:52:37.111547 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59737aed-9ee6-404a-90cb-647f77904e7a" containerName="extract-content" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.111562 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="59737aed-9ee6-404a-90cb-647f77904e7a" containerName="extract-content" Nov 25 20:52:37 crc kubenswrapper[4775]: E1125 20:52:37.111593 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59737aed-9ee6-404a-90cb-647f77904e7a" containerName="extract-utilities" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.111608 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="59737aed-9ee6-404a-90cb-647f77904e7a" containerName="extract-utilities" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 
20:52:37.112040 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="59737aed-9ee6-404a-90cb-647f77904e7a" containerName="registry-server" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.114801 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.120490 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-frvh8"] Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.196962 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z2wv\" (UniqueName: \"kubernetes.io/projected/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-kube-api-access-9z2wv\") pod \"redhat-operators-frvh8\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.197139 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-utilities\") pod \"redhat-operators-frvh8\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.197174 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-catalog-content\") pod \"redhat-operators-frvh8\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.298627 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-utilities\") pod 
\"redhat-operators-frvh8\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.298704 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-catalog-content\") pod \"redhat-operators-frvh8\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.298823 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z2wv\" (UniqueName: \"kubernetes.io/projected/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-kube-api-access-9z2wv\") pod \"redhat-operators-frvh8\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.299149 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-utilities\") pod \"redhat-operators-frvh8\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.299234 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-catalog-content\") pod \"redhat-operators-frvh8\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.328570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z2wv\" (UniqueName: \"kubernetes.io/projected/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-kube-api-access-9z2wv\") pod \"redhat-operators-frvh8\" (UID: 
\"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.452715 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:37 crc kubenswrapper[4775]: I1125 20:52:37.966943 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-frvh8"] Nov 25 20:52:38 crc kubenswrapper[4775]: I1125 20:52:38.501389 4775 generic.go:334] "Generic (PLEG): container finished" podID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerID="eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca" exitCode=0 Nov 25 20:52:38 crc kubenswrapper[4775]: I1125 20:52:38.501443 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frvh8" event={"ID":"5a3d14d5-abc0-439d-a393-2fd6c21a0d51","Type":"ContainerDied","Data":"eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca"} Nov 25 20:52:38 crc kubenswrapper[4775]: I1125 20:52:38.501713 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frvh8" event={"ID":"5a3d14d5-abc0-439d-a393-2fd6c21a0d51","Type":"ContainerStarted","Data":"4deab3436b82bc5e437583685acd3ba25ab9ea53b11b7ab1400c651af547db0c"} Nov 25 20:52:38 crc kubenswrapper[4775]: I1125 20:52:38.504396 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:52:40 crc kubenswrapper[4775]: I1125 20:52:40.527382 4775 generic.go:334] "Generic (PLEG): container finished" podID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerID="10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d" exitCode=0 Nov 25 20:52:40 crc kubenswrapper[4775]: I1125 20:52:40.527438 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frvh8" 
event={"ID":"5a3d14d5-abc0-439d-a393-2fd6c21a0d51","Type":"ContainerDied","Data":"10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d"} Nov 25 20:52:41 crc kubenswrapper[4775]: I1125 20:52:41.538178 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frvh8" event={"ID":"5a3d14d5-abc0-439d-a393-2fd6c21a0d51","Type":"ContainerStarted","Data":"5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb"} Nov 25 20:52:41 crc kubenswrapper[4775]: I1125 20:52:41.557740 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-frvh8" podStartSLOduration=2.004389068 podStartE2EDuration="4.557722548s" podCreationTimestamp="2025-11-25 20:52:37 +0000 UTC" firstStartedPulling="2025-11-25 20:52:38.504007071 +0000 UTC m=+4740.420369447" lastFinishedPulling="2025-11-25 20:52:41.057340541 +0000 UTC m=+4742.973702927" observedRunningTime="2025-11-25 20:52:41.554019519 +0000 UTC m=+4743.470381915" watchObservedRunningTime="2025-11-25 20:52:41.557722548 +0000 UTC m=+4743.474084914" Nov 25 20:52:43 crc kubenswrapper[4775]: I1125 20:52:43.848011 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:52:43 crc kubenswrapper[4775]: E1125 20:52:43.848854 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:52:47 crc kubenswrapper[4775]: I1125 20:52:47.454006 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:47 crc kubenswrapper[4775]: I1125 20:52:47.454829 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:47 crc kubenswrapper[4775]: I1125 20:52:47.541160 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:47 crc kubenswrapper[4775]: I1125 20:52:47.664349 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:47 crc kubenswrapper[4775]: I1125 20:52:47.798394 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-frvh8"] Nov 25 20:52:47 crc kubenswrapper[4775]: I1125 20:52:47.846944 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:52:47 crc kubenswrapper[4775]: E1125 20:52:47.847716 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:52:49 crc kubenswrapper[4775]: I1125 20:52:49.629086 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-frvh8" podUID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerName="registry-server" containerID="cri-o://5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb" gracePeriod=2 Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.117943 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.183193 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z2wv\" (UniqueName: \"kubernetes.io/projected/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-kube-api-access-9z2wv\") pod \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.183261 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-catalog-content\") pod \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.183416 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-utilities\") pod \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\" (UID: \"5a3d14d5-abc0-439d-a393-2fd6c21a0d51\") " Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.185062 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-utilities" (OuterVolumeSpecName: "utilities") pod "5a3d14d5-abc0-439d-a393-2fd6c21a0d51" (UID: "5a3d14d5-abc0-439d-a393-2fd6c21a0d51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.189821 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-kube-api-access-9z2wv" (OuterVolumeSpecName: "kube-api-access-9z2wv") pod "5a3d14d5-abc0-439d-a393-2fd6c21a0d51" (UID: "5a3d14d5-abc0-439d-a393-2fd6c21a0d51"). InnerVolumeSpecName "kube-api-access-9z2wv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.285329 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z2wv\" (UniqueName: \"kubernetes.io/projected/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-kube-api-access-9z2wv\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.285354 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.292271 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a3d14d5-abc0-439d-a393-2fd6c21a0d51" (UID: "5a3d14d5-abc0-439d-a393-2fd6c21a0d51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.387816 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a3d14d5-abc0-439d-a393-2fd6c21a0d51-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.648215 4775 generic.go:334] "Generic (PLEG): container finished" podID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerID="5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb" exitCode=0 Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.648276 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frvh8" event={"ID":"5a3d14d5-abc0-439d-a393-2fd6c21a0d51","Type":"ContainerDied","Data":"5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb"} Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.648320 4775 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-frvh8" event={"ID":"5a3d14d5-abc0-439d-a393-2fd6c21a0d51","Type":"ContainerDied","Data":"4deab3436b82bc5e437583685acd3ba25ab9ea53b11b7ab1400c651af547db0c"} Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.648281 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-frvh8" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.648352 4775 scope.go:117] "RemoveContainer" containerID="5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.693452 4775 scope.go:117] "RemoveContainer" containerID="10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.705344 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-frvh8"] Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.716150 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-frvh8"] Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.735780 4775 scope.go:117] "RemoveContainer" containerID="eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.796301 4775 scope.go:117] "RemoveContainer" containerID="5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb" Nov 25 20:52:50 crc kubenswrapper[4775]: E1125 20:52:50.797326 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb\": container with ID starting with 5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb not found: ID does not exist" containerID="5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.797427 4775 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb"} err="failed to get container status \"5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb\": rpc error: code = NotFound desc = could not find container \"5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb\": container with ID starting with 5ba3c34304320fc68342f058ade45269b94d18896e7cfbaf5935ed86e94a0ebb not found: ID does not exist" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.797476 4775 scope.go:117] "RemoveContainer" containerID="10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d" Nov 25 20:52:50 crc kubenswrapper[4775]: E1125 20:52:50.798036 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d\": container with ID starting with 10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d not found: ID does not exist" containerID="10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.798073 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d"} err="failed to get container status \"10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d\": rpc error: code = NotFound desc = could not find container \"10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d\": container with ID starting with 10c553b245e2f307b7e360f795cb9f7174a16244cb261225b90319145747b20d not found: ID does not exist" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.798102 4775 scope.go:117] "RemoveContainer" containerID="eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca" Nov 25 20:52:50 crc kubenswrapper[4775]: E1125 
20:52:50.798560 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca\": container with ID starting with eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca not found: ID does not exist" containerID="eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.798622 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca"} err="failed to get container status \"eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca\": rpc error: code = NotFound desc = could not find container \"eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca\": container with ID starting with eb5edff22883d6c036fa9fc17366822a7aa68d46b84db0cce4955894117be5ca not found: ID does not exist" Nov 25 20:52:50 crc kubenswrapper[4775]: I1125 20:52:50.862189 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" path="/var/lib/kubelet/pods/5a3d14d5-abc0-439d-a393-2fd6c21a0d51/volumes" Nov 25 20:52:55 crc kubenswrapper[4775]: I1125 20:52:55.847687 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:52:55 crc kubenswrapper[4775]: E1125 20:52:55.848628 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:53:02 crc kubenswrapper[4775]: I1125 20:53:02.870769 4775 scope.go:117] "RemoveContainer" 
containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:53:03 crc kubenswrapper[4775]: I1125 20:53:03.801325 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d"} Nov 25 20:53:05 crc kubenswrapper[4775]: I1125 20:53:05.828862 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" exitCode=1 Nov 25 20:53:05 crc kubenswrapper[4775]: I1125 20:53:05.828925 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d"} Nov 25 20:53:05 crc kubenswrapper[4775]: I1125 20:53:05.829613 4775 scope.go:117] "RemoveContainer" containerID="8e47ec4d338278a08aeca98d977aa14b793774eba5f458c71908e6ac52dcc3e9" Nov 25 20:53:05 crc kubenswrapper[4775]: I1125 20:53:05.830618 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:53:05 crc kubenswrapper[4775]: E1125 20:53:05.831366 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:53:06 crc kubenswrapper[4775]: I1125 20:53:06.848828 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:53:07 crc kubenswrapper[4775]: I1125 20:53:07.854166 4775 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"43b0f1f3197fcb924cd792eca8a540c7759b457f997871665504c5fc1e063935"} Nov 25 20:53:07 crc kubenswrapper[4775]: I1125 20:53:07.855085 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:53:11 crc kubenswrapper[4775]: I1125 20:53:11.071348 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:53:11 crc kubenswrapper[4775]: I1125 20:53:11.072300 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:53:13 crc kubenswrapper[4775]: I1125 20:53:13.104792 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:53:13 crc kubenswrapper[4775]: I1125 20:53:13.105159 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:53:13 crc kubenswrapper[4775]: I1125 20:53:13.107932 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:53:13 crc kubenswrapper[4775]: E1125 20:53:13.110202 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" 
pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:53:23 crc kubenswrapper[4775]: I1125 20:53:23.106908 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:53:23 crc kubenswrapper[4775]: I1125 20:53:23.108676 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:53:23 crc kubenswrapper[4775]: E1125 20:53:23.109178 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:53:23 crc kubenswrapper[4775]: I1125 20:53:23.193192 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:53:23 crc kubenswrapper[4775]: I1125 20:53:23.200664 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:53:33 crc kubenswrapper[4775]: I1125 20:53:33.148250 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:53:33 crc kubenswrapper[4775]: I1125 20:53:33.158803 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Nov 25 20:53:34 crc kubenswrapper[4775]: I1125 20:53:34.847901 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:53:34 crc kubenswrapper[4775]: E1125 20:53:34.848544 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:53:41 crc kubenswrapper[4775]: I1125 20:53:41.070604 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:53:41 crc kubenswrapper[4775]: I1125 20:53:41.072240 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:53:42 crc kubenswrapper[4775]: I1125 20:53:42.205854 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:53:42 crc kubenswrapper[4775]: I1125 20:53:42.206337 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 20:53:42 crc kubenswrapper[4775]: I1125 20:53:42.207732 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" 
podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:53:42 crc kubenswrapper[4775]: I1125 20:53:42.208022 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"43b0f1f3197fcb924cd792eca8a540c7759b457f997871665504c5fc1e063935"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 20:53:42 crc kubenswrapper[4775]: I1125 20:53:42.208117 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://43b0f1f3197fcb924cd792eca8a540c7759b457f997871665504c5fc1e063935" gracePeriod=30 Nov 25 20:53:42 crc kubenswrapper[4775]: I1125 20:53:42.217054 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:53:46 crc kubenswrapper[4775]: I1125 20:53:46.401774 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="43b0f1f3197fcb924cd792eca8a540c7759b457f997871665504c5fc1e063935" exitCode=0 Nov 25 20:53:46 crc kubenswrapper[4775]: I1125 20:53:46.401841 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"43b0f1f3197fcb924cd792eca8a540c7759b457f997871665504c5fc1e063935"} Nov 25 20:53:46 crc kubenswrapper[4775]: I1125 20:53:46.402518 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8"} 
Nov 25 20:53:46 crc kubenswrapper[4775]: I1125 20:53:46.402545 4775 scope.go:117] "RemoveContainer" containerID="1d40eb64e4629134a4b8118c59422731d288afb7f09b20fba695541b24dc6d3e" Nov 25 20:53:46 crc kubenswrapper[4775]: I1125 20:53:46.402926 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:53:47 crc kubenswrapper[4775]: I1125 20:53:47.846561 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:53:47 crc kubenswrapper[4775]: E1125 20:53:47.847114 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:54:01 crc kubenswrapper[4775]: I1125 20:54:01.847154 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:54:01 crc kubenswrapper[4775]: E1125 20:54:01.848068 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:54:03 crc kubenswrapper[4775]: I1125 20:54:03.225563 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:54:03 crc kubenswrapper[4775]: I1125 20:54:03.235124 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" 
podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:54:11 crc kubenswrapper[4775]: I1125 20:54:11.070322 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:54:11 crc kubenswrapper[4775]: I1125 20:54:11.071105 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:54:11 crc kubenswrapper[4775]: I1125 20:54:11.071182 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 20:54:11 crc kubenswrapper[4775]: I1125 20:54:11.072524 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96a31c04b17396a85beeeb573cd247c8b10f0b0302f5a0e3481eb43aa4c1f294"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:54:11 crc kubenswrapper[4775]: I1125 20:54:11.072630 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://96a31c04b17396a85beeeb573cd247c8b10f0b0302f5a0e3481eb43aa4c1f294" gracePeriod=600 Nov 25 20:54:11 crc kubenswrapper[4775]: I1125 20:54:11.678529 4775 
generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="96a31c04b17396a85beeeb573cd247c8b10f0b0302f5a0e3481eb43aa4c1f294" exitCode=0 Nov 25 20:54:11 crc kubenswrapper[4775]: I1125 20:54:11.678585 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"96a31c04b17396a85beeeb573cd247c8b10f0b0302f5a0e3481eb43aa4c1f294"} Nov 25 20:54:11 crc kubenswrapper[4775]: I1125 20:54:11.678987 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886"} Nov 25 20:54:11 crc kubenswrapper[4775]: I1125 20:54:11.679013 4775 scope.go:117] "RemoveContainer" containerID="5c147bafbce6334e1ac7b1e8a5bf60a1d2e67c4e2dce20831e63cff859f5be74" Nov 25 20:54:12 crc kubenswrapper[4775]: I1125 20:54:12.848352 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:54:12 crc kubenswrapper[4775]: E1125 20:54:12.849508 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:54:13 crc kubenswrapper[4775]: I1125 20:54:13.082541 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:54:13 crc kubenswrapper[4775]: I1125 20:54:13.213593 4775 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:54:22 crc kubenswrapper[4775]: I1125 20:54:22.202278 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:54:22 crc kubenswrapper[4775]: I1125 20:54:22.205281 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:54:22 crc kubenswrapper[4775]: I1125 20:54:22.205330 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 20:54:22 crc kubenswrapper[4775]: I1125 20:54:22.206193 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 20:54:22 crc kubenswrapper[4775]: I1125 20:54:22.206229 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" gracePeriod=30 Nov 25 20:54:22 crc kubenswrapper[4775]: I1125 20:54:22.214468 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF" Nov 
25 20:54:25 crc kubenswrapper[4775]: E1125 20:54:25.447328 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:54:25 crc kubenswrapper[4775]: I1125 20:54:25.846812 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" exitCode=0 Nov 25 20:54:25 crc kubenswrapper[4775]: I1125 20:54:25.846871 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8"} Nov 25 20:54:25 crc kubenswrapper[4775]: I1125 20:54:25.846918 4775 scope.go:117] "RemoveContainer" containerID="43b0f1f3197fcb924cd792eca8a540c7759b457f997871665504c5fc1e063935" Nov 25 20:54:25 crc kubenswrapper[4775]: I1125 20:54:25.847891 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:54:25 crc kubenswrapper[4775]: E1125 20:54:25.848337 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:54:25 crc kubenswrapper[4775]: I1125 20:54:25.848534 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:54:25 crc kubenswrapper[4775]: E1125 20:54:25.849411 4775 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:54:37 crc kubenswrapper[4775]: I1125 20:54:37.847545 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:54:37 crc kubenswrapper[4775]: E1125 20:54:37.848765 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:54:39 crc kubenswrapper[4775]: I1125 20:54:39.847911 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:54:39 crc kubenswrapper[4775]: E1125 20:54:39.848403 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:54:50 crc kubenswrapper[4775]: I1125 20:54:50.847519 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:54:50 crc kubenswrapper[4775]: E1125 20:54:50.848420 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" 
pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:54:51 crc kubenswrapper[4775]: I1125 20:54:51.849824 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:54:51 crc kubenswrapper[4775]: E1125 20:54:51.851019 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:55:02 crc kubenswrapper[4775]: I1125 20:55:02.848588 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:55:02 crc kubenswrapper[4775]: E1125 20:55:02.849769 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:55:06 crc kubenswrapper[4775]: I1125 20:55:06.848019 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:55:06 crc kubenswrapper[4775]: E1125 20:55:06.849196 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:55:15 crc kubenswrapper[4775]: I1125 20:55:15.847159 4775 scope.go:117] "RemoveContainer" 
containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:55:15 crc kubenswrapper[4775]: E1125 20:55:15.848002 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:55:17 crc kubenswrapper[4775]: I1125 20:55:17.847784 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:55:17 crc kubenswrapper[4775]: E1125 20:55:17.848551 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:55:26 crc kubenswrapper[4775]: I1125 20:55:26.846981 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:55:26 crc kubenswrapper[4775]: E1125 20:55:26.848455 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:55:28 crc kubenswrapper[4775]: I1125 20:55:28.861523 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:55:28 crc kubenswrapper[4775]: E1125 20:55:28.862118 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:55:41 crc kubenswrapper[4775]: I1125 20:55:41.847263 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:55:41 crc kubenswrapper[4775]: E1125 20:55:41.847937 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:55:43 crc kubenswrapper[4775]: I1125 20:55:43.848609 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:55:43 crc kubenswrapper[4775]: E1125 20:55:43.849515 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:55:55 crc kubenswrapper[4775]: I1125 20:55:55.847515 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:55:55 crc kubenswrapper[4775]: E1125 20:55:55.848231 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 
25 20:55:56 crc kubenswrapper[4775]: I1125 20:55:56.848317 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:55:56 crc kubenswrapper[4775]: E1125 20:55:56.848731 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:56:09 crc kubenswrapper[4775]: I1125 20:56:09.847816 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:56:09 crc kubenswrapper[4775]: E1125 20:56:09.848828 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:56:10 crc kubenswrapper[4775]: I1125 20:56:10.848203 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:56:10 crc kubenswrapper[4775]: E1125 20:56:10.849196 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:56:11 crc kubenswrapper[4775]: I1125 20:56:11.070749 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:56:11 crc kubenswrapper[4775]: I1125 20:56:11.070841 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:56:22 crc kubenswrapper[4775]: I1125 20:56:22.848361 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:56:22 crc kubenswrapper[4775]: E1125 20:56:22.849527 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:56:23 crc kubenswrapper[4775]: I1125 20:56:23.848295 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:56:23 crc kubenswrapper[4775]: E1125 20:56:23.849531 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.075132 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xjtj6"] Nov 25 20:56:31 crc kubenswrapper[4775]: E1125 20:56:31.076173 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerName="extract-content" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.076268 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerName="extract-content" Nov 25 20:56:31 crc kubenswrapper[4775]: E1125 20:56:31.076294 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerName="extract-utilities" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.076305 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerName="extract-utilities" Nov 25 20:56:31 crc kubenswrapper[4775]: E1125 20:56:31.076323 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerName="registry-server" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.076331 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerName="registry-server" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.076590 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a3d14d5-abc0-439d-a393-2fd6c21a0d51" containerName="registry-server" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.078299 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.095379 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjtj6"] Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.259593 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058435d6-8288-4143-9685-582d6c98e51e-utilities\") pod \"certified-operators-xjtj6\" (UID: \"058435d6-8288-4143-9685-582d6c98e51e\") " pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.260335 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td575\" (UniqueName: \"kubernetes.io/projected/058435d6-8288-4143-9685-582d6c98e51e-kube-api-access-td575\") pod \"certified-operators-xjtj6\" (UID: \"058435d6-8288-4143-9685-582d6c98e51e\") " pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.260507 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058435d6-8288-4143-9685-582d6c98e51e-catalog-content\") pod \"certified-operators-xjtj6\" (UID: \"058435d6-8288-4143-9685-582d6c98e51e\") " pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.362136 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058435d6-8288-4143-9685-582d6c98e51e-catalog-content\") pod \"certified-operators-xjtj6\" (UID: \"058435d6-8288-4143-9685-582d6c98e51e\") " pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.362245 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058435d6-8288-4143-9685-582d6c98e51e-utilities\") pod \"certified-operators-xjtj6\" (UID: \"058435d6-8288-4143-9685-582d6c98e51e\") " pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.362360 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td575\" (UniqueName: \"kubernetes.io/projected/058435d6-8288-4143-9685-582d6c98e51e-kube-api-access-td575\") pod \"certified-operators-xjtj6\" (UID: \"058435d6-8288-4143-9685-582d6c98e51e\") " pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.362800 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058435d6-8288-4143-9685-582d6c98e51e-catalog-content\") pod \"certified-operators-xjtj6\" (UID: \"058435d6-8288-4143-9685-582d6c98e51e\") " pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.362813 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058435d6-8288-4143-9685-582d6c98e51e-utilities\") pod \"certified-operators-xjtj6\" (UID: \"058435d6-8288-4143-9685-582d6c98e51e\") " pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.395402 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td575\" (UniqueName: \"kubernetes.io/projected/058435d6-8288-4143-9685-582d6c98e51e-kube-api-access-td575\") pod \"certified-operators-xjtj6\" (UID: \"058435d6-8288-4143-9685-582d6c98e51e\") " pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.415041 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:31 crc kubenswrapper[4775]: I1125 20:56:31.761579 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjtj6"] Nov 25 20:56:32 crc kubenswrapper[4775]: I1125 20:56:32.230004 4775 generic.go:334] "Generic (PLEG): container finished" podID="058435d6-8288-4143-9685-582d6c98e51e" containerID="ec11b304f571513e8ec0f2a26c2e05b58db23820e5aa42e38f57081626da8c16" exitCode=0 Nov 25 20:56:32 crc kubenswrapper[4775]: I1125 20:56:32.230102 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjtj6" event={"ID":"058435d6-8288-4143-9685-582d6c98e51e","Type":"ContainerDied","Data":"ec11b304f571513e8ec0f2a26c2e05b58db23820e5aa42e38f57081626da8c16"} Nov 25 20:56:32 crc kubenswrapper[4775]: I1125 20:56:32.230347 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjtj6" event={"ID":"058435d6-8288-4143-9685-582d6c98e51e","Type":"ContainerStarted","Data":"a1c43d18a558ccf04263970d8c07bf54dec0abb538326de3436a8304ae6208d0"} Nov 25 20:56:36 crc kubenswrapper[4775]: I1125 20:56:36.847706 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:56:36 crc kubenswrapper[4775]: I1125 20:56:36.848405 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:56:36 crc kubenswrapper[4775]: E1125 20:56:36.848560 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:56:36 crc kubenswrapper[4775]: E1125 20:56:36.848619 4775 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:56:37 crc kubenswrapper[4775]: E1125 20:56:37.703506 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod058435d6_8288_4143_9685_582d6c98e51e.slice/crio-6c88a3f5286493a4e2cc31cd120d082e55a4873cd8e8e99dc305e4bba8318848.scope\": RecentStats: unable to find data in memory cache]" Nov 25 20:56:38 crc kubenswrapper[4775]: I1125 20:56:38.318280 4775 generic.go:334] "Generic (PLEG): container finished" podID="058435d6-8288-4143-9685-582d6c98e51e" containerID="6c88a3f5286493a4e2cc31cd120d082e55a4873cd8e8e99dc305e4bba8318848" exitCode=0 Nov 25 20:56:38 crc kubenswrapper[4775]: I1125 20:56:38.318559 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjtj6" event={"ID":"058435d6-8288-4143-9685-582d6c98e51e","Type":"ContainerDied","Data":"6c88a3f5286493a4e2cc31cd120d082e55a4873cd8e8e99dc305e4bba8318848"} Nov 25 20:56:39 crc kubenswrapper[4775]: I1125 20:56:39.336360 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjtj6" event={"ID":"058435d6-8288-4143-9685-582d6c98e51e","Type":"ContainerStarted","Data":"a8250385341fbad7182e8bb7691fcaf162127e876e8f8ab2edc7fcbfe74f08df"} Nov 25 20:56:41 crc kubenswrapper[4775]: I1125 20:56:41.070837 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 
20:56:41 crc kubenswrapper[4775]: I1125 20:56:41.071503 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:56:41 crc kubenswrapper[4775]: I1125 20:56:41.416199 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:41 crc kubenswrapper[4775]: I1125 20:56:41.416527 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:41 crc kubenswrapper[4775]: I1125 20:56:41.503478 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:41 crc kubenswrapper[4775]: I1125 20:56:41.551468 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xjtj6" podStartSLOduration=4.063287205 podStartE2EDuration="10.551441352s" podCreationTimestamp="2025-11-25 20:56:31 +0000 UTC" firstStartedPulling="2025-11-25 20:56:32.232415933 +0000 UTC m=+4974.148778339" lastFinishedPulling="2025-11-25 20:56:38.72057012 +0000 UTC m=+4980.636932486" observedRunningTime="2025-11-25 20:56:39.375988811 +0000 UTC m=+4981.292351217" watchObservedRunningTime="2025-11-25 20:56:41.551441352 +0000 UTC m=+4983.467803758" Nov 25 20:56:47 crc kubenswrapper[4775]: I1125 20:56:47.846802 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:56:47 crc kubenswrapper[4775]: E1125 20:56:47.848342 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:56:48 crc kubenswrapper[4775]: I1125 20:56:48.853524 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:56:48 crc kubenswrapper[4775]: E1125 20:56:48.854114 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:56:51 crc kubenswrapper[4775]: I1125 20:56:51.499049 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xjtj6" Nov 25 20:56:51 crc kubenswrapper[4775]: I1125 20:56:51.608321 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjtj6"] Nov 25 20:56:51 crc kubenswrapper[4775]: I1125 20:56:51.653223 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nk42n"] Nov 25 20:56:51 crc kubenswrapper[4775]: I1125 20:56:51.653468 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nk42n" podUID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerName="registry-server" containerID="cri-o://49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0" gracePeriod=2 Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.096727 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-24v8q/must-gather-5qzmq"] Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.098528 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.101121 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-24v8q"/"openshift-service-ca.crt" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.101178 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-24v8q"/"kube-root-ca.crt" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.101271 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-24v8q"/"default-dockercfg-wxn77" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.124767 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-24v8q/must-gather-5qzmq"] Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.239716 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk42n" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.263381 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwms\" (UniqueName: \"kubernetes.io/projected/310ceeb8-f3c1-4fde-8667-8c8f837be80b-kube-api-access-8fwms\") pod \"must-gather-5qzmq\" (UID: \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\") " pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.263490 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/310ceeb8-f3c1-4fde-8667-8c8f837be80b-must-gather-output\") pod \"must-gather-5qzmq\" (UID: \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\") " pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.364482 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fftfk\" (UniqueName: \"kubernetes.io/projected/e5ddc286-2bb2-438f-9a74-6279f0f76753-kube-api-access-fftfk\") pod \"e5ddc286-2bb2-438f-9a74-6279f0f76753\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.364703 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-utilities\") pod \"e5ddc286-2bb2-438f-9a74-6279f0f76753\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.364842 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-catalog-content\") pod \"e5ddc286-2bb2-438f-9a74-6279f0f76753\" (UID: \"e5ddc286-2bb2-438f-9a74-6279f0f76753\") " Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.365115 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwms\" (UniqueName: \"kubernetes.io/projected/310ceeb8-f3c1-4fde-8667-8c8f837be80b-kube-api-access-8fwms\") pod \"must-gather-5qzmq\" (UID: \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\") " pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.365206 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/310ceeb8-f3c1-4fde-8667-8c8f837be80b-must-gather-output\") pod \"must-gather-5qzmq\" (UID: \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\") " pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.365242 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-utilities" (OuterVolumeSpecName: "utilities") pod 
"e5ddc286-2bb2-438f-9a74-6279f0f76753" (UID: "e5ddc286-2bb2-438f-9a74-6279f0f76753"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.365706 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.365774 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/310ceeb8-f3c1-4fde-8667-8c8f837be80b-must-gather-output\") pod \"must-gather-5qzmq\" (UID: \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\") " pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.370045 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ddc286-2bb2-438f-9a74-6279f0f76753-kube-api-access-fftfk" (OuterVolumeSpecName: "kube-api-access-fftfk") pod "e5ddc286-2bb2-438f-9a74-6279f0f76753" (UID: "e5ddc286-2bb2-438f-9a74-6279f0f76753"). InnerVolumeSpecName "kube-api-access-fftfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.384762 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwms\" (UniqueName: \"kubernetes.io/projected/310ceeb8-f3c1-4fde-8667-8c8f837be80b-kube-api-access-8fwms\") pod \"must-gather-5qzmq\" (UID: \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\") " pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.413211 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5ddc286-2bb2-438f-9a74-6279f0f76753" (UID: "e5ddc286-2bb2-438f-9a74-6279f0f76753"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.423208 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.466946 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fftfk\" (UniqueName: \"kubernetes.io/projected/e5ddc286-2bb2-438f-9a74-6279f0f76753-kube-api-access-fftfk\") on node \"crc\" DevicePath \"\"" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.467261 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddc286-2bb2-438f-9a74-6279f0f76753-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.518921 4775 generic.go:334] "Generic (PLEG): container finished" podID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerID="49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0" exitCode=0 Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.519796 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nk42n" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.522690 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk42n" event={"ID":"e5ddc286-2bb2-438f-9a74-6279f0f76753","Type":"ContainerDied","Data":"49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0"} Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.522722 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk42n" event={"ID":"e5ddc286-2bb2-438f-9a74-6279f0f76753","Type":"ContainerDied","Data":"61ea9db72acc91ae897fd0ffc991a4353c576e2a330d63f4564f62f31f86e929"} Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.522739 4775 scope.go:117] "RemoveContainer" containerID="49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.559039 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nk42n"] Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.564523 4775 scope.go:117] "RemoveContainer" containerID="43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.574399 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nk42n"] Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.621142 4775 scope.go:117] "RemoveContainer" containerID="67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.662879 4775 scope.go:117] "RemoveContainer" containerID="49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0" Nov 25 20:56:52 crc kubenswrapper[4775]: E1125 20:56:52.663354 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0\": container with ID starting with 49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0 not found: ID does not exist" containerID="49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.663397 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0"} err="failed to get container status \"49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0\": rpc error: code = NotFound desc = could not find container \"49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0\": container with ID starting with 49a1147bb7869cc5830822435195940de9e0791744209a9a01277e8cd1c356d0 not found: ID does not exist" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.663422 4775 scope.go:117] "RemoveContainer" containerID="43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4" Nov 25 20:56:52 crc kubenswrapper[4775]: E1125 20:56:52.663806 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4\": container with ID starting with 43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4 not found: ID does not exist" containerID="43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.663877 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4"} err="failed to get container status \"43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4\": rpc error: code = NotFound desc = could not find container \"43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4\": container with ID 
starting with 43212c85c8ff5b1c213ff766cb58697a705eab26719e8f1147167aa5a7e80da4 not found: ID does not exist" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.663942 4775 scope.go:117] "RemoveContainer" containerID="67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c" Nov 25 20:56:52 crc kubenswrapper[4775]: E1125 20:56:52.664237 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c\": container with ID starting with 67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c not found: ID does not exist" containerID="67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.664263 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c"} err="failed to get container status \"67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c\": rpc error: code = NotFound desc = could not find container \"67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c\": container with ID starting with 67d5bab39790c2ecaaa7c7efb294f4553372bbcd5de6025ac7bad976679f5b5c not found: ID does not exist" Nov 25 20:56:52 crc kubenswrapper[4775]: I1125 20:56:52.856405 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ddc286-2bb2-438f-9a74-6279f0f76753" path="/var/lib/kubelet/pods/e5ddc286-2bb2-438f-9a74-6279f0f76753/volumes" Nov 25 20:56:53 crc kubenswrapper[4775]: I1125 20:56:53.046386 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-24v8q/must-gather-5qzmq"] Nov 25 20:56:53 crc kubenswrapper[4775]: I1125 20:56:53.529349 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-24v8q/must-gather-5qzmq" 
event={"ID":"310ceeb8-f3c1-4fde-8667-8c8f837be80b","Type":"ContainerStarted","Data":"326428cd56956f7c0dffda66f3a88f4e5e54a7794d44aa41f7fca7481a23cbb8"} Nov 25 20:56:58 crc kubenswrapper[4775]: I1125 20:56:58.575333 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-24v8q/must-gather-5qzmq" event={"ID":"310ceeb8-f3c1-4fde-8667-8c8f837be80b","Type":"ContainerStarted","Data":"5423c0197716a5e76ab9d83fc6058ddb1c39d34bcb48c7a18da19ff9289fd1e7"} Nov 25 20:56:58 crc kubenswrapper[4775]: I1125 20:56:58.575808 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-24v8q/must-gather-5qzmq" event={"ID":"310ceeb8-f3c1-4fde-8667-8c8f837be80b","Type":"ContainerStarted","Data":"0c4f3c7bf5238737bc73c7c049023eefe0f8b562f202efbc7ccf3d02021b8c95"} Nov 25 20:56:58 crc kubenswrapper[4775]: I1125 20:56:58.601830 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-24v8q/must-gather-5qzmq" podStartSLOduration=2.462389627 podStartE2EDuration="6.601812239s" podCreationTimestamp="2025-11-25 20:56:52 +0000 UTC" firstStartedPulling="2025-11-25 20:56:53.056574097 +0000 UTC m=+4994.972936463" lastFinishedPulling="2025-11-25 20:56:57.195996669 +0000 UTC m=+4999.112359075" observedRunningTime="2025-11-25 20:56:58.589195468 +0000 UTC m=+5000.505557844" watchObservedRunningTime="2025-11-25 20:56:58.601812239 +0000 UTC m=+5000.518174605" Nov 25 20:56:59 crc kubenswrapper[4775]: I1125 20:56:59.847420 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:56:59 crc kubenswrapper[4775]: E1125 20:56:59.847994 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" 
podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:57:02 crc kubenswrapper[4775]: I1125 20:57:02.847391 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:57:02 crc kubenswrapper[4775]: E1125 20:57:02.848338 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:57:02 crc kubenswrapper[4775]: I1125 20:57:02.942143 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-24v8q/crc-debug-bbxqg"] Nov 25 20:57:02 crc kubenswrapper[4775]: E1125 20:57:02.942563 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerName="extract-content" Nov 25 20:57:02 crc kubenswrapper[4775]: I1125 20:57:02.942587 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerName="extract-content" Nov 25 20:57:02 crc kubenswrapper[4775]: E1125 20:57:02.942701 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerName="extract-utilities" Nov 25 20:57:02 crc kubenswrapper[4775]: I1125 20:57:02.942724 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerName="extract-utilities" Nov 25 20:57:02 crc kubenswrapper[4775]: E1125 20:57:02.942755 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerName="registry-server" Nov 25 20:57:02 crc kubenswrapper[4775]: I1125 20:57:02.942764 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ddc286-2bb2-438f-9a74-6279f0f76753" 
containerName="registry-server" Nov 25 20:57:02 crc kubenswrapper[4775]: I1125 20:57:02.943014 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ddc286-2bb2-438f-9a74-6279f0f76753" containerName="registry-server" Nov 25 20:57:02 crc kubenswrapper[4775]: I1125 20:57:02.944116 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:02 crc kubenswrapper[4775]: I1125 20:57:02.984724 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mndpt\" (UniqueName: \"kubernetes.io/projected/4a95d7d5-024c-4461-a199-48ec471c99ec-kube-api-access-mndpt\") pod \"crc-debug-bbxqg\" (UID: \"4a95d7d5-024c-4461-a199-48ec471c99ec\") " pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:02 crc kubenswrapper[4775]: I1125 20:57:02.984980 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a95d7d5-024c-4461-a199-48ec471c99ec-host\") pod \"crc-debug-bbxqg\" (UID: \"4a95d7d5-024c-4461-a199-48ec471c99ec\") " pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:03 crc kubenswrapper[4775]: I1125 20:57:03.086614 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mndpt\" (UniqueName: \"kubernetes.io/projected/4a95d7d5-024c-4461-a199-48ec471c99ec-kube-api-access-mndpt\") pod \"crc-debug-bbxqg\" (UID: \"4a95d7d5-024c-4461-a199-48ec471c99ec\") " pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:03 crc kubenswrapper[4775]: I1125 20:57:03.086670 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a95d7d5-024c-4461-a199-48ec471c99ec-host\") pod \"crc-debug-bbxqg\" (UID: \"4a95d7d5-024c-4461-a199-48ec471c99ec\") " pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:03 crc 
kubenswrapper[4775]: I1125 20:57:03.086800 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a95d7d5-024c-4461-a199-48ec471c99ec-host\") pod \"crc-debug-bbxqg\" (UID: \"4a95d7d5-024c-4461-a199-48ec471c99ec\") " pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:03 crc kubenswrapper[4775]: I1125 20:57:03.104389 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mndpt\" (UniqueName: \"kubernetes.io/projected/4a95d7d5-024c-4461-a199-48ec471c99ec-kube-api-access-mndpt\") pod \"crc-debug-bbxqg\" (UID: \"4a95d7d5-024c-4461-a199-48ec471c99ec\") " pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:03 crc kubenswrapper[4775]: I1125 20:57:03.264203 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:03 crc kubenswrapper[4775]: I1125 20:57:03.625347 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-24v8q/crc-debug-bbxqg" event={"ID":"4a95d7d5-024c-4461-a199-48ec471c99ec","Type":"ContainerStarted","Data":"c4c5090e1a04a974c14ddfba6a3de5b6dfb81b1cf50319cce6e063d9b644e8f6"} Nov 25 20:57:11 crc kubenswrapper[4775]: I1125 20:57:11.070915 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:57:11 crc kubenswrapper[4775]: I1125 20:57:11.071493 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:57:11 crc 
kubenswrapper[4775]: I1125 20:57:11.071537 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 20:57:11 crc kubenswrapper[4775]: I1125 20:57:11.072315 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:57:11 crc kubenswrapper[4775]: I1125 20:57:11.072360 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" gracePeriod=600 Nov 25 20:57:11 crc kubenswrapper[4775]: I1125 20:57:11.699573 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" exitCode=0 Nov 25 20:57:11 crc kubenswrapper[4775]: I1125 20:57:11.699631 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886"} Nov 25 20:57:11 crc kubenswrapper[4775]: I1125 20:57:11.699757 4775 scope.go:117] "RemoveContainer" containerID="96a31c04b17396a85beeeb573cd247c8b10f0b0302f5a0e3481eb43aa4c1f294" Nov 25 20:57:13 crc kubenswrapper[4775]: E1125 20:57:13.192774 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:57:13 crc kubenswrapper[4775]: I1125 20:57:13.729192 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:57:13 crc kubenswrapper[4775]: E1125 20:57:13.729830 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:57:13 crc kubenswrapper[4775]: I1125 20:57:13.730550 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-24v8q/crc-debug-bbxqg" event={"ID":"4a95d7d5-024c-4461-a199-48ec471c99ec","Type":"ContainerStarted","Data":"636067f49a95fdd2cb7f882b5436803b7056fa407ad401b36b891fdab81526e9"} Nov 25 20:57:13 crc kubenswrapper[4775]: I1125 20:57:13.848469 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:57:13 crc kubenswrapper[4775]: E1125 20:57:13.848888 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:57:17 crc kubenswrapper[4775]: I1125 20:57:17.846964 4775 scope.go:117] "RemoveContainer" 
containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:57:17 crc kubenswrapper[4775]: E1125 20:57:17.847775 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:57:26 crc kubenswrapper[4775]: I1125 20:57:26.847784 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:57:26 crc kubenswrapper[4775]: E1125 20:57:26.848680 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:57:28 crc kubenswrapper[4775]: I1125 20:57:28.856408 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:57:28 crc kubenswrapper[4775]: I1125 20:57:28.857154 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:57:28 crc kubenswrapper[4775]: E1125 20:57:28.857224 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:57:28 crc kubenswrapper[4775]: E1125 
20:57:28.857350 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:57:33 crc kubenswrapper[4775]: I1125 20:57:33.904215 4775 generic.go:334] "Generic (PLEG): container finished" podID="4a95d7d5-024c-4461-a199-48ec471c99ec" containerID="636067f49a95fdd2cb7f882b5436803b7056fa407ad401b36b891fdab81526e9" exitCode=0 Nov 25 20:57:33 crc kubenswrapper[4775]: I1125 20:57:33.904369 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-24v8q/crc-debug-bbxqg" event={"ID":"4a95d7d5-024c-4461-a199-48ec471c99ec","Type":"ContainerDied","Data":"636067f49a95fdd2cb7f882b5436803b7056fa407ad401b36b891fdab81526e9"} Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.039281 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.069617 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-24v8q/crc-debug-bbxqg"] Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.077801 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-24v8q/crc-debug-bbxqg"] Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.236254 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a95d7d5-024c-4461-a199-48ec471c99ec-host\") pod \"4a95d7d5-024c-4461-a199-48ec471c99ec\" (UID: \"4a95d7d5-024c-4461-a199-48ec471c99ec\") " Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.236367 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mndpt\" (UniqueName: \"kubernetes.io/projected/4a95d7d5-024c-4461-a199-48ec471c99ec-kube-api-access-mndpt\") pod \"4a95d7d5-024c-4461-a199-48ec471c99ec\" (UID: \"4a95d7d5-024c-4461-a199-48ec471c99ec\") " Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.236427 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a95d7d5-024c-4461-a199-48ec471c99ec-host" (OuterVolumeSpecName: "host") pod "4a95d7d5-024c-4461-a199-48ec471c99ec" (UID: "4a95d7d5-024c-4461-a199-48ec471c99ec"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.236775 4775 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a95d7d5-024c-4461-a199-48ec471c99ec-host\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.243318 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a95d7d5-024c-4461-a199-48ec471c99ec-kube-api-access-mndpt" (OuterVolumeSpecName: "kube-api-access-mndpt") pod "4a95d7d5-024c-4461-a199-48ec471c99ec" (UID: "4a95d7d5-024c-4461-a199-48ec471c99ec"). InnerVolumeSpecName "kube-api-access-mndpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.338915 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mndpt\" (UniqueName: \"kubernetes.io/projected/4a95d7d5-024c-4461-a199-48ec471c99ec-kube-api-access-mndpt\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.939309 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4c5090e1a04a974c14ddfba6a3de5b6dfb81b1cf50319cce6e063d9b644e8f6" Nov 25 20:57:35 crc kubenswrapper[4775]: I1125 20:57:35.939351 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-24v8q/crc-debug-bbxqg" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.269505 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-24v8q/crc-debug-m65hj"] Nov 25 20:57:36 crc kubenswrapper[4775]: E1125 20:57:36.269950 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a95d7d5-024c-4461-a199-48ec471c99ec" containerName="container-00" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.269967 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a95d7d5-024c-4461-a199-48ec471c99ec" containerName="container-00" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.270126 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a95d7d5-024c-4461-a199-48ec471c99ec" containerName="container-00" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.270821 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.362517 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlcj\" (UniqueName: \"kubernetes.io/projected/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-kube-api-access-rrlcj\") pod \"crc-debug-m65hj\" (UID: \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\") " pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.362706 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-host\") pod \"crc-debug-m65hj\" (UID: \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\") " pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.464582 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlcj\" (UniqueName: 
\"kubernetes.io/projected/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-kube-api-access-rrlcj\") pod \"crc-debug-m65hj\" (UID: \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\") " pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.464913 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-host\") pod \"crc-debug-m65hj\" (UID: \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\") " pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.465095 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-host\") pod \"crc-debug-m65hj\" (UID: \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\") " pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.491241 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlcj\" (UniqueName: \"kubernetes.io/projected/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-kube-api-access-rrlcj\") pod \"crc-debug-m65hj\" (UID: \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\") " pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.598395 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.864027 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a95d7d5-024c-4461-a199-48ec471c99ec" path="/var/lib/kubelet/pods/4a95d7d5-024c-4461-a199-48ec471c99ec/volumes" Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.949847 4775 generic.go:334] "Generic (PLEG): container finished" podID="c52e7cb1-3615-4cf9-8ee2-ead56489b5fa" containerID="be9b26c9898e2f269c74136ce8249187cfb6d22c52212a28ebc223384be1d602" exitCode=1 Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.949956 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-24v8q/crc-debug-m65hj" event={"ID":"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa","Type":"ContainerDied","Data":"be9b26c9898e2f269c74136ce8249187cfb6d22c52212a28ebc223384be1d602"} Nov 25 20:57:36 crc kubenswrapper[4775]: I1125 20:57:36.950305 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-24v8q/crc-debug-m65hj" event={"ID":"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa","Type":"ContainerStarted","Data":"5e81c3b1ade7b771a50ad7e10de74dc87bed127eab6a67cf818ce47d4730846b"} Nov 25 20:57:37 crc kubenswrapper[4775]: I1125 20:57:37.000822 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-24v8q/crc-debug-m65hj"] Nov 25 20:57:37 crc kubenswrapper[4775]: I1125 20:57:37.012354 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-24v8q/crc-debug-m65hj"] Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.086523 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.098896 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-host\") pod \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\" (UID: \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\") " Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.098995 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrlcj\" (UniqueName: \"kubernetes.io/projected/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-kube-api-access-rrlcj\") pod \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\" (UID: \"c52e7cb1-3615-4cf9-8ee2-ead56489b5fa\") " Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.099050 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-host" (OuterVolumeSpecName: "host") pod "c52e7cb1-3615-4cf9-8ee2-ead56489b5fa" (UID: "c52e7cb1-3615-4cf9-8ee2-ead56489b5fa"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.099423 4775 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-host\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.104615 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-kube-api-access-rrlcj" (OuterVolumeSpecName: "kube-api-access-rrlcj") pod "c52e7cb1-3615-4cf9-8ee2-ead56489b5fa" (UID: "c52e7cb1-3615-4cf9-8ee2-ead56489b5fa"). InnerVolumeSpecName "kube-api-access-rrlcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.200884 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrlcj\" (UniqueName: \"kubernetes.io/projected/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa-kube-api-access-rrlcj\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.861569 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c52e7cb1-3615-4cf9-8ee2-ead56489b5fa" path="/var/lib/kubelet/pods/c52e7cb1-3615-4cf9-8ee2-ead56489b5fa/volumes" Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.970311 4775 scope.go:117] "RemoveContainer" containerID="be9b26c9898e2f269c74136ce8249187cfb6d22c52212a28ebc223384be1d602" Nov 25 20:57:38 crc kubenswrapper[4775]: I1125 20:57:38.970360 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-24v8q/crc-debug-m65hj" Nov 25 20:57:40 crc kubenswrapper[4775]: I1125 20:57:40.847835 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:57:40 crc kubenswrapper[4775]: I1125 20:57:40.848871 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:57:40 crc kubenswrapper[4775]: E1125 20:57:40.849032 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:57:40 crc kubenswrapper[4775]: E1125 20:57:40.849317 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:57:42 crc kubenswrapper[4775]: I1125 20:57:42.848801 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:57:42 crc kubenswrapper[4775]: E1125 20:57:42.849297 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:57:52 crc kubenswrapper[4775]: I1125 20:57:52.848437 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:57:52 crc kubenswrapper[4775]: I1125 20:57:52.848872 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:57:52 crc kubenswrapper[4775]: E1125 20:57:52.848973 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:57:52 crc kubenswrapper[4775]: E1125 20:57:52.849103 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" 
pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:57:57 crc kubenswrapper[4775]: I1125 20:57:57.847746 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:57:57 crc kubenswrapper[4775]: E1125 20:57:57.849106 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:58:06 crc kubenswrapper[4775]: I1125 20:58:06.847724 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:58:06 crc kubenswrapper[4775]: E1125 20:58:06.848685 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:58:06 crc kubenswrapper[4775]: I1125 20:58:06.848734 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:58:06 crc kubenswrapper[4775]: E1125 20:58:06.849080 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:58:09 crc kubenswrapper[4775]: I1125 20:58:09.846528 4775 scope.go:117] 
"RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:58:11 crc kubenswrapper[4775]: I1125 20:58:11.307387 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d"} Nov 25 20:58:13 crc kubenswrapper[4775]: I1125 20:58:13.105330 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:58:13 crc kubenswrapper[4775]: I1125 20:58:13.329606 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" exitCode=1 Nov 25 20:58:13 crc kubenswrapper[4775]: I1125 20:58:13.329689 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d"} Nov 25 20:58:13 crc kubenswrapper[4775]: I1125 20:58:13.330172 4775 scope.go:117] "RemoveContainer" containerID="68b347485b3600fea760f3df962d18be5d922093477fee0bc8e5821e49c9654d" Nov 25 20:58:13 crc kubenswrapper[4775]: I1125 20:58:13.330823 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:58:13 crc kubenswrapper[4775]: E1125 20:58:13.331085 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:58:19 crc kubenswrapper[4775]: I1125 20:58:19.847891 4775 scope.go:117] 
"RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:58:19 crc kubenswrapper[4775]: I1125 20:58:19.848468 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:58:19 crc kubenswrapper[4775]: E1125 20:58:19.848578 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:58:19 crc kubenswrapper[4775]: E1125 20:58:19.848704 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:58:23 crc kubenswrapper[4775]: I1125 20:58:23.104784 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:58:23 crc kubenswrapper[4775]: I1125 20:58:23.105478 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 20:58:23 crc kubenswrapper[4775]: I1125 20:58:23.106336 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:58:23 crc kubenswrapper[4775]: E1125 20:58:23.106753 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share 
pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:58:31 crc kubenswrapper[4775]: I1125 20:58:31.846515 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:58:31 crc kubenswrapper[4775]: E1125 20:58:31.847141 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:58:32 crc kubenswrapper[4775]: I1125 20:58:32.329781 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7444d7c94-bsxlv_8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade/barbican-api/0.log" Nov 25 20:58:32 crc kubenswrapper[4775]: I1125 20:58:32.499822 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7444d7c94-bsxlv_8ba477cb-3f3e-41e2-9ca3-fe3c94fbdade/barbican-api-log/0.log" Nov 25 20:58:32 crc kubenswrapper[4775]: I1125 20:58:32.537881 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54587fd766-l4dln_f4832609-d922-4c24-9b69-a9fbd2de6c86/barbican-keystone-listener/0.log" Nov 25 20:58:32 crc kubenswrapper[4775]: I1125 20:58:32.613148 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54587fd766-l4dln_f4832609-d922-4c24-9b69-a9fbd2de6c86/barbican-keystone-listener-log/0.log" Nov 25 20:58:32 crc kubenswrapper[4775]: I1125 20:58:32.728084 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bcd58f99f-7glc6_520e459e-76e8-4e4b-8e81-3eacb6bfe1c8/barbican-worker/0.log" Nov 25 20:58:32 crc kubenswrapper[4775]: I1125 20:58:32.757759 4775 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bcd58f99f-7glc6_520e459e-76e8-4e4b-8e81-3eacb6bfe1c8/barbican-worker-log/0.log" Nov 25 20:58:32 crc kubenswrapper[4775]: I1125 20:58:32.950101 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-cspz8_8fd008ce-79f9-4041-ad04-856eba5e0536/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:32 crc kubenswrapper[4775]: I1125 20:58:32.971001 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aba372ee-dd64-4cc1-a19d-d7f5e0bd0713/ceilometer-central-agent/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.041561 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aba372ee-dd64-4cc1-a19d-d7f5e0bd0713/ceilometer-notification-agent/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.104565 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aba372ee-dd64-4cc1-a19d-d7f5e0bd0713/sg-core/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.140777 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aba372ee-dd64-4cc1-a19d-d7f5e0bd0713/proxy-httpd/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.235534 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-cttgs_e24bbfc2-37c0-4052-95af-f338b0872857/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.307674 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-vx2rh_cafc818a-081e-48dd-ae98-001a1c00b074/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.487064 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_67cc10a4-ce8a-4820-ab5d-747e28306ee9/cinder-api/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.533300 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_67cc10a4-ce8a-4820-ab5d-747e28306ee9/cinder-api-log/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.713972 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0119f9f6-2582-47b7-a5a0-cfd393da9234/probe/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.738893 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0119f9f6-2582-47b7-a5a0-cfd393da9234/cinder-backup/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.793875 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7045afb5-9467-4dd9-8d42-257ef82590d2/cinder-scheduler/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.846758 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.847152 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:58:33 crc kubenswrapper[4775]: E1125 20:58:33.847432 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:58:33 crc kubenswrapper[4775]: E1125 20:58:33.847476 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.953142 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7045afb5-9467-4dd9-8d42-257ef82590d2/probe/0.log" Nov 25 20:58:33 crc kubenswrapper[4775]: I1125 20:58:33.953172 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c4638194-7edd-4cd4-bf51-e044eb343d94/cinder-volume/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.021257 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c4638194-7edd-4cd4-bf51-e044eb343d94/probe/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.138022 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8bjjl_9e54b6d7-5c5a-498c-868e-e7a35b93b448/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.239344 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-h2qmv_ed47c3bd-5136-4d5a-946b-924498853472/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.348136 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-gthzc_49613d54-e600-4168-8782-66c3fef8b983/init/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.481365 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-gthzc_49613d54-e600-4168-8782-66c3fef8b983/init/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.583143 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_9ff9ed95-35ed-4232-b848-8d85332bcb8f/glance-httpd/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.587136 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-gthzc_49613d54-e600-4168-8782-66c3fef8b983/dnsmasq-dns/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.699646 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9ff9ed95-35ed-4232-b848-8d85332bcb8f/glance-log/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.766129 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4fc4e051-af7c-469f-945c-46162d1aceb2/glance-httpd/0.log" Nov 25 20:58:34 crc kubenswrapper[4775]: I1125 20:58:34.793127 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4fc4e051-af7c-469f-945c-46162d1aceb2/glance-log/0.log" Nov 25 20:58:35 crc kubenswrapper[4775]: I1125 20:58:35.051621 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77ddd59696-rlw9m_d6f1f978-0027-4119-8469-5acf67c75746/horizon/0.log" Nov 25 20:58:35 crc kubenswrapper[4775]: I1125 20:58:35.265761 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77ddd59696-rlw9m_d6f1f978-0027-4119-8469-5acf67c75746/horizon-log/0.log" Nov 25 20:58:35 crc kubenswrapper[4775]: I1125 20:58:35.463343 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-84l2x_57e1b5b2-df3e-4dea-a00d-be8f1c4cf7c5/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:35 crc kubenswrapper[4775]: I1125 20:58:35.523980 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7qhz6_0b0b1001-6bd7-4db7-817b-dcb453399b78/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:35 crc kubenswrapper[4775]: I1125 20:58:35.698244 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401681-rzgcc_050bd532-dae0-46ac-93f0-096c75d4c0a6/keystone-cron/0.log" Nov 25 20:58:35 crc kubenswrapper[4775]: I1125 20:58:35.774941 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-857879b544-hkmfq_285f1de5-4e45-4b02-9ed9-b70b68f6b68d/keystone-api/0.log" Nov 25 20:58:35 crc kubenswrapper[4775]: I1125 20:58:35.910316 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7a38d394-4c5f-4d60-92af-3407e58769da/kube-state-metrics/0.log" Nov 25 20:58:35 crc kubenswrapper[4775]: I1125 20:58:35.989491 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8xqln_c7dcb097-31de-4297-9671-ac2644323c39/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 20:58:36.073857 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_a18f9ccb-ee60-48c8-9fe2-5a505036b958/manila-api/13.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 20:58:36.104561 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_a18f9ccb-ee60-48c8-9fe2-5a505036b958/manila-api/13.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 20:58:36.161505 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_a18f9ccb-ee60-48c8-9fe2-5a505036b958/manila-api-log/0.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 20:58:36.273696 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_5a2bec54-2f45-4aee-a3bf-774f63c4b64e/probe/0.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 
20:58:36.304945 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_5a2bec54-2f45-4aee-a3bf-774f63c4b64e/manila-scheduler/0.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 20:58:36.362195 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_0a88473d-4ba5-4147-bf60-128f0b7ea8f6/manila-share/11.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 20:58:36.454346 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_0a88473d-4ba5-4147-bf60-128f0b7ea8f6/probe/0.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 20:58:36.500127 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_0a88473d-4ba5-4147-bf60-128f0b7ea8f6/manila-share/11.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 20:58:36.675734 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fd788f7d7-czrxl_9017fd2b-6435-43eb-8c16-85894d4713e9/neutron-api/0.log" Nov 25 20:58:36 crc kubenswrapper[4775]: I1125 20:58:36.727581 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fd788f7d7-czrxl_9017fd2b-6435-43eb-8c16-85894d4713e9/neutron-httpd/0.log" Nov 25 20:58:37 crc kubenswrapper[4775]: I1125 20:58:37.281891 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-v44nh_79243ac0-3276-49bd-a57d-f0c4f9458add/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:37 crc kubenswrapper[4775]: I1125 20:58:37.767425 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_362aa62d-5f99-4f83-9996-e564df5182e1/nova-api-log/0.log" Nov 25 20:58:37 crc kubenswrapper[4775]: I1125 20:58:37.870320 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5c11a62e-4c1d-4451-9350-4a1d9458de6e/nova-cell0-conductor-conductor/0.log" Nov 25 20:58:38 
crc kubenswrapper[4775]: I1125 20:58:38.065792 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_362aa62d-5f99-4f83-9996-e564df5182e1/nova-api-api/0.log" Nov 25 20:58:38 crc kubenswrapper[4775]: I1125 20:58:38.070794 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_64e2c2e7-000b-4f8a-a064-209cd6036632/nova-cell1-conductor-conductor/0.log" Nov 25 20:58:38 crc kubenswrapper[4775]: I1125 20:58:38.206680 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3ec61d94-917a-4ddf-99c3-9a56d212ef64/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 20:58:38 crc kubenswrapper[4775]: I1125 20:58:38.318919 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4wlnp_44903c24-1252-485e-a390-3e79df1a521c/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:38 crc kubenswrapper[4775]: I1125 20:58:38.464823 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_678c6cb5-0c57-4a2f-8312-01c3230d1ff8/nova-metadata-log/0.log" Nov 25 20:58:38 crc kubenswrapper[4775]: I1125 20:58:38.787129 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_97e9f968-e12b-413d-a36b-7a2f16d0b1ec/mysql-bootstrap/0.log" Nov 25 20:58:38 crc kubenswrapper[4775]: I1125 20:58:38.793979 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9d3d4749-58d7-41df-acc0-27538415babd/nova-scheduler-scheduler/0.log" Nov 25 20:58:38 crc kubenswrapper[4775]: I1125 20:58:38.913412 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_97e9f968-e12b-413d-a36b-7a2f16d0b1ec/mysql-bootstrap/0.log" Nov 25 20:58:39 crc kubenswrapper[4775]: I1125 20:58:39.004555 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_97e9f968-e12b-413d-a36b-7a2f16d0b1ec/galera/0.log" Nov 25 20:58:39 crc kubenswrapper[4775]: I1125 20:58:39.100272 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1c8a9cba-f38d-45fb-8a7e-942f148611ab/mysql-bootstrap/0.log" Nov 25 20:58:39 crc kubenswrapper[4775]: I1125 20:58:39.315053 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1c8a9cba-f38d-45fb-8a7e-942f148611ab/mysql-bootstrap/0.log" Nov 25 20:58:39 crc kubenswrapper[4775]: I1125 20:58:39.317229 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1c8a9cba-f38d-45fb-8a7e-942f148611ab/galera/0.log" Nov 25 20:58:39 crc kubenswrapper[4775]: I1125 20:58:39.508554 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a36126d8-22b4-46b4-aa24-c02eba72023e/openstackclient/0.log" Nov 25 20:58:39 crc kubenswrapper[4775]: I1125 20:58:39.533225 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-k9862_462d24f9-e5cf-42b4-905e-13fa5f5716fe/ovn-controller/0.log" Nov 25 20:58:39 crc kubenswrapper[4775]: I1125 20:58:39.700884 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xz2l5_914e5d10-52cd-45c0-8da9-cd0fe095274c/openstack-network-exporter/0.log" Nov 25 20:58:39 crc kubenswrapper[4775]: I1125 20:58:39.904435 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ckpwc_c63e79d7-eea0-447e-b944-cd93ce3ebf55/ovsdb-server-init/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.059796 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ckpwc_c63e79d7-eea0-447e-b944-cd93ce3ebf55/ovsdb-server-init/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.099280 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-ckpwc_c63e79d7-eea0-447e-b944-cd93ce3ebf55/ovs-vswitchd/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.160792 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ckpwc_c63e79d7-eea0-447e-b944-cd93ce3ebf55/ovsdb-server/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.320666 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-slq8g_0d5edebb-e2fd-4744-b994-2559c10c9947/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.468266 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0a7b999c-f778-4a60-9cad-b00875e7713b/openstack-network-exporter/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.519277 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_678c6cb5-0c57-4a2f-8312-01c3230d1ff8/nova-metadata-metadata/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.543847 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0a7b999c-f778-4a60-9cad-b00875e7713b/ovn-northd/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.653810 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2b0bc2f5-2fcc-432c-b9c9-508383732023/openstack-network-exporter/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.711479 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2b0bc2f5-2fcc-432c-b9c9-508383732023/ovsdbserver-nb/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.845524 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_cd854157-5d64-4744-9065-45b8d7e08c80/openstack-network-exporter/0.log" Nov 25 20:58:40 crc kubenswrapper[4775]: I1125 20:58:40.939580 4775 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_cd854157-5d64-4744-9065-45b8d7e08c80/ovsdbserver-sb/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.036609 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f9dd67654-p257f_db7604e6-c828-4d0f-9b60-9ae852dce0b7/placement-api/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.119585 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f9dd67654-p257f_db7604e6-c828-4d0f-9b60-9ae852dce0b7/placement-log/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.187421 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_749c0c26-acc5-490a-9723-b45a341360bf/setup-container/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.363575 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_749c0c26-acc5-490a-9723-b45a341360bf/setup-container/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.381982 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_749c0c26-acc5-490a-9723-b45a341360bf/rabbitmq/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.411898 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_727ee9a0-e30b-4915-a405-c68a73d8a6e2/setup-container/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.581924 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_727ee9a0-e30b-4915-a405-c68a73d8a6e2/setup-container/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.595937 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_727ee9a0-e30b-4915-a405-c68a73d8a6e2/rabbitmq/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.703608 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vhhgz_87220c3d-2b9e-4ffb-bec9-df48355c9aac/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.815796 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fwvp8_9057fb85-d24d-4016-ac68-44e9e52440dd/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:41 crc kubenswrapper[4775]: I1125 20:58:41.959832 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5wvfr_b37d2556-fc78-4546-b531-ce2cebc6e8ec/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:42 crc kubenswrapper[4775]: I1125 20:58:42.046106 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jbgj4_dd8d944f-8e84-4b3c-a92f-d8a815571a85/ssh-known-hosts-edpm-deployment/0.log" Nov 25 20:58:42 crc kubenswrapper[4775]: I1125 20:58:42.207202 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mhqvr_db1b8608-e0b8-498f-94de-bff78ef4a19c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 20:58:44 crc kubenswrapper[4775]: I1125 20:58:44.847631 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:58:44 crc kubenswrapper[4775]: E1125 20:58:44.849409 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:58:45 crc kubenswrapper[4775]: I1125 20:58:45.847168 4775 scope.go:117] "RemoveContainer" 
containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:58:45 crc kubenswrapper[4775]: E1125 20:58:45.847728 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:58:46 crc kubenswrapper[4775]: I1125 20:58:46.846609 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:58:46 crc kubenswrapper[4775]: E1125 20:58:46.846877 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:58:47 crc kubenswrapper[4775]: I1125 20:58:47.323132 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bdb87a80-e2c2-4c52-b2d2-9f4416324624/memcached/0.log" Nov 25 20:58:57 crc kubenswrapper[4775]: I1125 20:58:57.848430 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:58:57 crc kubenswrapper[4775]: E1125 20:58:57.849605 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:58:59 crc kubenswrapper[4775]: I1125 
20:58:59.848016 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:58:59 crc kubenswrapper[4775]: E1125 20:58:59.848945 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:59:00 crc kubenswrapper[4775]: I1125 20:59:00.847425 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:59:00 crc kubenswrapper[4775]: E1125 20:59:00.847742 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:59:07 crc kubenswrapper[4775]: I1125 20:59:07.568881 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh_b2f991e3-e547-4228-89ed-7229d3bf188a/util/0.log" Nov 25 20:59:07 crc kubenswrapper[4775]: I1125 20:59:07.729663 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh_b2f991e3-e547-4228-89ed-7229d3bf188a/pull/0.log" Nov 25 20:59:07 crc kubenswrapper[4775]: I1125 20:59:07.740789 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh_b2f991e3-e547-4228-89ed-7229d3bf188a/util/0.log" Nov 25 20:59:07 crc 
kubenswrapper[4775]: I1125 20:59:07.788366 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh_b2f991e3-e547-4228-89ed-7229d3bf188a/pull/0.log" Nov 25 20:59:07 crc kubenswrapper[4775]: I1125 20:59:07.959643 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh_b2f991e3-e547-4228-89ed-7229d3bf188a/extract/0.log" Nov 25 20:59:07 crc kubenswrapper[4775]: I1125 20:59:07.969710 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh_b2f991e3-e547-4228-89ed-7229d3bf188a/pull/0.log" Nov 25 20:59:08 crc kubenswrapper[4775]: I1125 20:59:08.062080 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac9bb66603ce5108a8d2b226b2f7a5c85ad2232d15823ce400e3730134qwgmh_b2f991e3-e547-4228-89ed-7229d3bf188a/util/0.log" Nov 25 20:59:08 crc kubenswrapper[4775]: I1125 20:59:08.178061 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-w8459_a778d0b3-0440-4c61-8a61-59524e36835e/kube-rbac-proxy/0.log" Nov 25 20:59:08 crc kubenswrapper[4775]: I1125 20:59:08.223436 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-mh8t8_fb22768d-951e-4a69-bba6-8728e80e2935/kube-rbac-proxy/0.log" Nov 25 20:59:08 crc kubenswrapper[4775]: I1125 20:59:08.247068 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-w8459_a778d0b3-0440-4c61-8a61-59524e36835e/manager/0.log" Nov 25 20:59:08 crc kubenswrapper[4775]: I1125 20:59:08.411381 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-mh8t8_fb22768d-951e-4a69-bba6-8728e80e2935/manager/0.log" Nov 25 20:59:08 crc kubenswrapper[4775]: I1125 20:59:08.437612 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-vn52j_f1338e2e-e4e6-4c4b-a410-72e2d1acab0d/kube-rbac-proxy/0.log" Nov 25 20:59:08 crc kubenswrapper[4775]: I1125 20:59:08.461994 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-vn52j_f1338e2e-e4e6-4c4b-a410-72e2d1acab0d/manager/0.log" Nov 25 20:59:08 crc kubenswrapper[4775]: I1125 20:59:08.811747 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84dfd86bd6-8nk5f_ca39bca1-68fa-4d64-a929-1b3d013bb679/manager/0.log" Nov 25 20:59:08 crc kubenswrapper[4775]: I1125 20:59:08.912180 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84dfd86bd6-8nk5f_ca39bca1-68fa-4d64-a929-1b3d013bb679/kube-rbac-proxy/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.018259 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-knzcj_360afa93-07ee-47ad-beb7-cd45b9cc9bef/kube-rbac-proxy/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.072528 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-knzcj_360afa93-07ee-47ad-beb7-cd45b9cc9bef/manager/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.381715 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-892tw_08376459-180b-411f-9c74-c918980541f6/kube-rbac-proxy/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.420176 4775 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-892tw_08376459-180b-411f-9c74-c918980541f6/manager/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.555162 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tcv4j_256bc456-e90c-4c18-8531-9d0470473b55/kube-rbac-proxy/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.686117 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-86lv6_d8f444e1-3e73-4daa-a5f0-4fe2236a691b/kube-rbac-proxy/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.690965 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tcv4j_256bc456-e90c-4c18-8531-9d0470473b55/manager/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.891580 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-hjdwf_6910455f-354f-4f91-8333-5cb54be87db6/kube-rbac-proxy/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.907318 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-86lv6_d8f444e1-3e73-4daa-a5f0-4fe2236a691b/manager/0.log" Nov 25 20:59:09 crc kubenswrapper[4775]: I1125 20:59:09.989382 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-hjdwf_6910455f-354f-4f91-8333-5cb54be87db6/manager/0.log" Nov 25 20:59:10 crc kubenswrapper[4775]: I1125 20:59:10.820317 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-sd2lc_e7a4f97f-5b6f-4347-b156-d96e1be21183/manager/0.log" Nov 25 20:59:10 crc 
kubenswrapper[4775]: I1125 20:59:10.837890 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-sd2lc_e7a4f97f-5b6f-4347-b156-d96e1be21183/kube-rbac-proxy/0.log" Nov 25 20:59:10 crc kubenswrapper[4775]: I1125 20:59:10.849035 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:59:10 crc kubenswrapper[4775]: E1125 20:59:10.849443 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:59:10 crc kubenswrapper[4775]: I1125 20:59:10.850278 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:59:10 crc kubenswrapper[4775]: E1125 20:59:10.850521 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.004754 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-sj9pg_88abb3bd-eb47-4185-a1a9-4f300ed99167/kube-rbac-proxy/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.020795 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-sj9pg_88abb3bd-eb47-4185-a1a9-4f300ed99167/manager/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.061439 4775 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-5nm9r_9a436d5c-4f54-479c-846f-11e5d66d91fa/kube-rbac-proxy/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.145619 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-5nm9r_9a436d5c-4f54-479c-846f-11e5d66d91fa/manager/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.219149 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-rlx29_74ce2e86-cedf-4014-8d4c-8c126d58e7c9/kube-rbac-proxy/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.318436 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-qdknn_99a43674-e3dd-46c8-8fe7-b527112b3ff1/kube-rbac-proxy/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.357432 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-rlx29_74ce2e86-cedf-4014-8d4c-8c126d58e7c9/manager/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.455195 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-qdknn_99a43674-e3dd-46c8-8fe7-b527112b3ff1/manager/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.536793 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv_7b75a9f0-bd88-4e53-973a-0ce97e41cec8/kube-rbac-proxy/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.535154 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bbw9fv_7b75a9f0-bd88-4e53-973a-0ce97e41cec8/manager/0.log" Nov 25 
20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.867069 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sbx5l_98ddeb43-7cad-4125-9945-9d152c7df25b/registry-server/0.log" Nov 25 20:59:11 crc kubenswrapper[4775]: I1125 20:59:11.947374 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-cd684d8f4-wqnhf_265ba024-3e37-4150-a70e-80cd60462c3c/operator/0.log" Nov 25 20:59:12 crc kubenswrapper[4775]: I1125 20:59:12.448540 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-p6c9k_d9838469-3633-4b7d-88dc-0a6fd8c272ce/kube-rbac-proxy/0.log" Nov 25 20:59:12 crc kubenswrapper[4775]: I1125 20:59:12.471781 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-p6c9k_d9838469-3633-4b7d-88dc-0a6fd8c272ce/manager/0.log" Nov 25 20:59:12 crc kubenswrapper[4775]: I1125 20:59:12.670001 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-snkf4_fdbde397-fc85-41aa-915f-3b8d77553adc/kube-rbac-proxy/0.log" Nov 25 20:59:12 crc kubenswrapper[4775]: I1125 20:59:12.674981 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hg4df_592eda0a-f963-48bf-9902-3e52795051e3/operator/0.log" Nov 25 20:59:12 crc kubenswrapper[4775]: I1125 20:59:12.691521 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-snkf4_fdbde397-fc85-41aa-915f-3b8d77553adc/manager/0.log" Nov 25 20:59:12 crc kubenswrapper[4775]: I1125 20:59:12.856493 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-w2rwh_d01f394d-062f-4736-a7fa-abe501a5b2d9/manager/0.log" Nov 25 20:59:12 crc kubenswrapper[4775]: I1125 20:59:12.872752 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-w2rwh_d01f394d-062f-4736-a7fa-abe501a5b2d9/kube-rbac-proxy/0.log" Nov 25 20:59:13 crc kubenswrapper[4775]: I1125 20:59:13.088047 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-qnklp_8af71b48-ef6a-4e7f-8d32-e627f46a93ff/kube-rbac-proxy/0.log" Nov 25 20:59:13 crc kubenswrapper[4775]: I1125 20:59:13.118895 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-qnklp_8af71b48-ef6a-4e7f-8d32-e627f46a93ff/manager/0.log" Nov 25 20:59:13 crc kubenswrapper[4775]: I1125 20:59:13.194688 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7f84d9dcfc-rrwgl_b5f009d3-7b77-49c1-b5f1-b8219b31ed47/manager/0.log" Nov 25 20:59:13 crc kubenswrapper[4775]: I1125 20:59:13.201895 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-s5nz8_043fa652-c214-4428-877b-723905f53acb/kube-rbac-proxy/0.log" Nov 25 20:59:13 crc kubenswrapper[4775]: I1125 20:59:13.228452 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-s5nz8_043fa652-c214-4428-877b-723905f53acb/manager/0.log" Nov 25 20:59:13 crc kubenswrapper[4775]: I1125 20:59:13.334680 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-mhvjh_c368de49-6c69-4140-a8c2-21c7afc13031/kube-rbac-proxy/0.log" Nov 25 20:59:13 crc kubenswrapper[4775]: I1125 20:59:13.398064 
4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-mhvjh_c368de49-6c69-4140-a8c2-21c7afc13031/manager/0.log" Nov 25 20:59:15 crc kubenswrapper[4775]: I1125 20:59:15.847137 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:59:15 crc kubenswrapper[4775]: E1125 20:59:15.848071 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:59:21 crc kubenswrapper[4775]: I1125 20:59:21.848082 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:59:21 crc kubenswrapper[4775]: E1125 20:59:21.849159 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:59:25 crc kubenswrapper[4775]: I1125 20:59:25.846822 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 20:59:26 crc kubenswrapper[4775]: I1125 20:59:26.992496 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"85c970ea1d58b9057d326be7643e8f473a289dfa43dd2152f060ad069cfbdeb7"} Nov 25 20:59:26 crc kubenswrapper[4775]: I1125 20:59:26.994708 
4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 20:59:30 crc kubenswrapper[4775]: I1125 20:59:30.847365 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:59:30 crc kubenswrapper[4775]: E1125 20:59:30.848287 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:59:32 crc kubenswrapper[4775]: I1125 20:59:32.848905 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-g582m_57313bf3-1361-49f7-9a66-922b42ea36e7/control-plane-machine-set-operator/0.log" Nov 25 20:59:33 crc kubenswrapper[4775]: I1125 20:59:33.018934 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q5ml6_e865c9de-8fd2-4b09-854c-0426a35d3290/machine-api-operator/0.log" Nov 25 20:59:33 crc kubenswrapper[4775]: I1125 20:59:33.044153 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q5ml6_e865c9de-8fd2-4b09-854c-0426a35d3290/kube-rbac-proxy/0.log" Nov 25 20:59:35 crc kubenswrapper[4775]: I1125 20:59:35.847308 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:59:35 crc kubenswrapper[4775]: E1125 20:59:35.848164 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share 
pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:59:43 crc kubenswrapper[4775]: I1125 20:59:43.206849 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:59:43 crc kubenswrapper[4775]: I1125 20:59:43.240143 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:59:43 crc kubenswrapper[4775]: I1125 20:59:43.848236 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:59:43 crc kubenswrapper[4775]: E1125 20:59:43.849013 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:59:46 crc kubenswrapper[4775]: I1125 20:59:46.850332 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:59:46 crc kubenswrapper[4775]: E1125 20:59:46.850903 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" 
podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 20:59:47 crc kubenswrapper[4775]: I1125 20:59:47.740740 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-rl5x8_3ce20161-b6cf-4b36-83fe-61486d2e747f/cert-manager-controller/0.log" Nov 25 20:59:48 crc kubenswrapper[4775]: I1125 20:59:48.014106 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-ntqwn_5d9e5ddd-f0c5-4134-92bd-8e6e6022ed0d/cert-manager-cainjector/0.log" Nov 25 20:59:48 crc kubenswrapper[4775]: I1125 20:59:48.075913 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-vwq5q_63c18de1-e4dd-44f0-9b01-e8a3f3f6c238/cert-manager-webhook/0.log" Nov 25 20:59:53 crc kubenswrapper[4775]: I1125 20:59:53.132041 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:59:53 crc kubenswrapper[4775]: I1125 20:59:53.184239 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:59:56 crc kubenswrapper[4775]: I1125 20:59:56.847262 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 20:59:56 crc kubenswrapper[4775]: E1125 20:59:56.848201 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" 
podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 20:59:58 crc kubenswrapper[4775]: I1125 20:59:58.853471 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 20:59:58 crc kubenswrapper[4775]: E1125 20:59:58.854829 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.135260 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nw99f"] Nov 25 21:00:00 crc kubenswrapper[4775]: E1125 21:00:00.135936 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52e7cb1-3615-4cf9-8ee2-ead56489b5fa" containerName="container-00" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.135948 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52e7cb1-3615-4cf9-8ee2-ead56489b5fa" containerName="container-00" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.136154 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c52e7cb1-3615-4cf9-8ee2-ead56489b5fa" containerName="container-00" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.137493 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.160172 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw99f"] Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.186330 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-catalog-content\") pod \"redhat-marketplace-nw99f\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.186433 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srlft\" (UniqueName: \"kubernetes.io/projected/dae78140-5d75-4239-8a07-b7919cea5907-kube-api-access-srlft\") pod \"redhat-marketplace-nw99f\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.186532 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-utilities\") pod \"redhat-marketplace-nw99f\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.234290 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k"] Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.235902 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.237612 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.243792 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.244217 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k"] Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.288300 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npmvr\" (UniqueName: \"kubernetes.io/projected/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-kube-api-access-npmvr\") pod \"collect-profiles-29401740-q6x8k\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.288391 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-secret-volume\") pod \"collect-profiles-29401740-q6x8k\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.288582 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-catalog-content\") pod \"redhat-marketplace-nw99f\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 
crc kubenswrapper[4775]: I1125 21:00:00.288893 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srlft\" (UniqueName: \"kubernetes.io/projected/dae78140-5d75-4239-8a07-b7919cea5907-kube-api-access-srlft\") pod \"redhat-marketplace-nw99f\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.288928 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-utilities\") pod \"redhat-marketplace-nw99f\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.289218 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-catalog-content\") pod \"redhat-marketplace-nw99f\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.289902 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-config-volume\") pod \"collect-profiles-29401740-q6x8k\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.290724 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-utilities\") pod \"redhat-marketplace-nw99f\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc 
kubenswrapper[4775]: I1125 21:00:00.310832 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srlft\" (UniqueName: \"kubernetes.io/projected/dae78140-5d75-4239-8a07-b7919cea5907-kube-api-access-srlft\") pod \"redhat-marketplace-nw99f\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.391669 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-config-volume\") pod \"collect-profiles-29401740-q6x8k\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.391785 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npmvr\" (UniqueName: \"kubernetes.io/projected/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-kube-api-access-npmvr\") pod \"collect-profiles-29401740-q6x8k\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.391844 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-secret-volume\") pod \"collect-profiles-29401740-q6x8k\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.393310 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-config-volume\") pod \"collect-profiles-29401740-q6x8k\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.394984 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-secret-volume\") pod \"collect-profiles-29401740-q6x8k\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.411101 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npmvr\" (UniqueName: \"kubernetes.io/projected/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-kube-api-access-npmvr\") pod \"collect-profiles-29401740-q6x8k\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.458658 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.552766 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:00 crc kubenswrapper[4775]: W1125 21:00:00.944905 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddae78140_5d75_4239_8a07_b7919cea5907.slice/crio-bb724f539e887f028ba6196384ea50f1c6390d46fb2e8aa067c5d59fe6ae9e18 WatchSource:0}: Error finding container bb724f539e887f028ba6196384ea50f1c6390d46fb2e8aa067c5d59fe6ae9e18: Status 404 returned error can't find the container with id bb724f539e887f028ba6196384ea50f1c6390d46fb2e8aa067c5d59fe6ae9e18 Nov 25 21:00:00 crc kubenswrapper[4775]: I1125 21:00:00.944921 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw99f"] Nov 25 21:00:01 crc kubenswrapper[4775]: I1125 21:00:01.072552 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k"] Nov 25 21:00:01 crc kubenswrapper[4775]: W1125 21:00:01.077131 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccbe6dfa_b2b5_4415_aff9_e9884a80906c.slice/crio-0ad8d0cf9948580dcecb4938a81e2a37d3795ca24319ee06e8de36aaf6af952f WatchSource:0}: Error finding container 0ad8d0cf9948580dcecb4938a81e2a37d3795ca24319ee06e8de36aaf6af952f: Status 404 returned error can't find the container with id 0ad8d0cf9948580dcecb4938a81e2a37d3795ca24319ee06e8de36aaf6af952f Nov 25 21:00:01 crc kubenswrapper[4775]: I1125 21:00:01.326609 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" event={"ID":"ccbe6dfa-b2b5-4415-aff9-e9884a80906c","Type":"ContainerStarted","Data":"9034524d9c28f9bc433d239b01dd1b967cc94d3c4eacb4629e331dca758bfd2e"} Nov 25 21:00:01 crc kubenswrapper[4775]: I1125 21:00:01.326981 4775 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" event={"ID":"ccbe6dfa-b2b5-4415-aff9-e9884a80906c","Type":"ContainerStarted","Data":"0ad8d0cf9948580dcecb4938a81e2a37d3795ca24319ee06e8de36aaf6af952f"} Nov 25 21:00:01 crc kubenswrapper[4775]: I1125 21:00:01.328317 4775 generic.go:334] "Generic (PLEG): container finished" podID="dae78140-5d75-4239-8a07-b7919cea5907" containerID="744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d" exitCode=0 Nov 25 21:00:01 crc kubenswrapper[4775]: I1125 21:00:01.328386 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw99f" event={"ID":"dae78140-5d75-4239-8a07-b7919cea5907","Type":"ContainerDied","Data":"744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d"} Nov 25 21:00:01 crc kubenswrapper[4775]: I1125 21:00:01.328419 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw99f" event={"ID":"dae78140-5d75-4239-8a07-b7919cea5907","Type":"ContainerStarted","Data":"bb724f539e887f028ba6196384ea50f1c6390d46fb2e8aa067c5d59fe6ae9e18"} Nov 25 21:00:01 crc kubenswrapper[4775]: I1125 21:00:01.330212 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 21:00:01 crc kubenswrapper[4775]: I1125 21:00:01.344399 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" podStartSLOduration=1.344382902 podStartE2EDuration="1.344382902s" podCreationTimestamp="2025-11-25 21:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 21:00:01.342195072 +0000 UTC m=+5183.258557448" watchObservedRunningTime="2025-11-25 21:00:01.344382902 +0000 UTC m=+5183.260745268" Nov 25 21:00:02 crc kubenswrapper[4775]: I1125 21:00:02.203551 4775 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:00:02 crc kubenswrapper[4775]: I1125 21:00:02.204763 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:00:02 crc kubenswrapper[4775]: I1125 21:00:02.204801 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 21:00:02 crc kubenswrapper[4775]: I1125 21:00:02.205420 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"85c970ea1d58b9057d326be7643e8f473a289dfa43dd2152f060ad069cfbdeb7"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 21:00:02 crc kubenswrapper[4775]: I1125 21:00:02.205454 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://85c970ea1d58b9057d326be7643e8f473a289dfa43dd2152f060ad069cfbdeb7" gracePeriod=30 Nov 25 21:00:02 crc kubenswrapper[4775]: I1125 21:00:02.209112 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:00:02 crc kubenswrapper[4775]: I1125 21:00:02.339308 4775 generic.go:334] "Generic (PLEG): container finished" podID="ccbe6dfa-b2b5-4415-aff9-e9884a80906c" containerID="9034524d9c28f9bc433d239b01dd1b967cc94d3c4eacb4629e331dca758bfd2e" exitCode=0 Nov 25 21:00:02 crc kubenswrapper[4775]: I1125 21:00:02.339357 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" event={"ID":"ccbe6dfa-b2b5-4415-aff9-e9884a80906c","Type":"ContainerDied","Data":"9034524d9c28f9bc433d239b01dd1b967cc94d3c4eacb4629e331dca758bfd2e"} Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.223373 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-bz2l6_b486f12c-2fa3-4826-a246-2f805253df99/nmstate-console-plugin/0.log" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.405274 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-glq7r_c972f926-912d-49e8-8533-10045e2263da/nmstate-handler/0.log" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.448480 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-zht2b_7c6692fe-ee5b-431f-ab39-cb684d304bc1/kube-rbac-proxy/0.log" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.470524 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-zht2b_7c6692fe-ee5b-431f-ab39-cb684d304bc1/nmstate-metrics/0.log" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.692070 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.736415 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-d6clh_ba33ec42-80cf-4999-9cb7-19b3aed25d86/nmstate-operator/0.log" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.757174 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-config-volume\") pod \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.757396 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-secret-volume\") pod \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.757441 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npmvr\" (UniqueName: \"kubernetes.io/projected/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-kube-api-access-npmvr\") pod \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\" (UID: \"ccbe6dfa-b2b5-4415-aff9-e9884a80906c\") " Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.757943 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-config-volume" (OuterVolumeSpecName: "config-volume") pod "ccbe6dfa-b2b5-4415-aff9-e9884a80906c" (UID: "ccbe6dfa-b2b5-4415-aff9-e9884a80906c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.762884 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-t5zws_b5dfbfdb-3ed8-442b-82c4-4cb389e18670/nmstate-webhook/0.log" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.765775 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ccbe6dfa-b2b5-4415-aff9-e9884a80906c" (UID: "ccbe6dfa-b2b5-4415-aff9-e9884a80906c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.765850 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-kube-api-access-npmvr" (OuterVolumeSpecName: "kube-api-access-npmvr") pod "ccbe6dfa-b2b5-4415-aff9-e9884a80906c" (UID: "ccbe6dfa-b2b5-4415-aff9-e9884a80906c"). InnerVolumeSpecName "kube-api-access-npmvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.860102 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.860142 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:03 crc kubenswrapper[4775]: I1125 21:00:03.860155 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npmvr\" (UniqueName: \"kubernetes.io/projected/ccbe6dfa-b2b5-4415-aff9-e9884a80906c-kube-api-access-npmvr\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:04 crc kubenswrapper[4775]: I1125 21:00:04.358282 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" event={"ID":"ccbe6dfa-b2b5-4415-aff9-e9884a80906c","Type":"ContainerDied","Data":"0ad8d0cf9948580dcecb4938a81e2a37d3795ca24319ee06e8de36aaf6af952f"} Nov 25 21:00:04 crc kubenswrapper[4775]: I1125 21:00:04.358645 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad8d0cf9948580dcecb4938a81e2a37d3795ca24319ee06e8de36aaf6af952f" Nov 25 21:00:04 crc kubenswrapper[4775]: I1125 21:00:04.358376 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-q6x8k" Nov 25 21:00:04 crc kubenswrapper[4775]: I1125 21:00:04.430280 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq"] Nov 25 21:00:04 crc kubenswrapper[4775]: I1125 21:00:04.438447 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-tzsgq"] Nov 25 21:00:04 crc kubenswrapper[4775]: I1125 21:00:04.858431 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2212fe2-02c7-4803-a221-c05221f0317f" path="/var/lib/kubelet/pods/d2212fe2-02c7-4803-a221-c05221f0317f/volumes" Nov 25 21:00:06 crc kubenswrapper[4775]: I1125 21:00:06.379945 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="85c970ea1d58b9057d326be7643e8f473a289dfa43dd2152f060ad069cfbdeb7" exitCode=0 Nov 25 21:00:06 crc kubenswrapper[4775]: I1125 21:00:06.380015 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"85c970ea1d58b9057d326be7643e8f473a289dfa43dd2152f060ad069cfbdeb7"} Nov 25 21:00:06 crc kubenswrapper[4775]: I1125 21:00:06.380596 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9"} Nov 25 21:00:06 crc kubenswrapper[4775]: I1125 21:00:06.380627 4775 scope.go:117] "RemoveContainer" containerID="0c7a251420cc2c6e8563dacb06ead496ec7e0a8a92d2abb1d8b0d2a9134026c8" Nov 25 21:00:06 crc kubenswrapper[4775]: I1125 21:00:06.381135 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 21:00:11 crc kubenswrapper[4775]: I1125 21:00:11.847363 4775 
scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:00:11 crc kubenswrapper[4775]: E1125 21:00:11.848099 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 21:00:12 crc kubenswrapper[4775]: I1125 21:00:12.848690 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:00:12 crc kubenswrapper[4775]: E1125 21:00:12.848909 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:00:14 crc kubenswrapper[4775]: I1125 21:00:14.499290 4775 generic.go:334] "Generic (PLEG): container finished" podID="dae78140-5d75-4239-8a07-b7919cea5907" containerID="4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e" exitCode=0 Nov 25 21:00:14 crc kubenswrapper[4775]: I1125 21:00:14.499399 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw99f" event={"ID":"dae78140-5d75-4239-8a07-b7919cea5907","Type":"ContainerDied","Data":"4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e"} Nov 25 21:00:15 crc kubenswrapper[4775]: I1125 21:00:15.520495 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw99f" 
event={"ID":"dae78140-5d75-4239-8a07-b7919cea5907","Type":"ContainerStarted","Data":"b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2"} Nov 25 21:00:15 crc kubenswrapper[4775]: I1125 21:00:15.557217 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nw99f" podStartSLOduration=1.925169265 podStartE2EDuration="15.557191498s" podCreationTimestamp="2025-11-25 21:00:00 +0000 UTC" firstStartedPulling="2025-11-25 21:00:01.329989693 +0000 UTC m=+5183.246352059" lastFinishedPulling="2025-11-25 21:00:14.962011926 +0000 UTC m=+5196.878374292" observedRunningTime="2025-11-25 21:00:15.551173216 +0000 UTC m=+5197.467535592" watchObservedRunningTime="2025-11-25 21:00:15.557191498 +0000 UTC m=+5197.473553864" Nov 25 21:00:20 crc kubenswrapper[4775]: I1125 21:00:20.459615 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:20 crc kubenswrapper[4775]: I1125 21:00:20.460134 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:20 crc kubenswrapper[4775]: I1125 21:00:20.526718 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:20 crc kubenswrapper[4775]: I1125 21:00:20.612597 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:20 crc kubenswrapper[4775]: I1125 21:00:20.764529 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw99f"] Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.200175 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-dddw7_52e5e476-3337-4715-9e67-b7230874d2d4/kube-rbac-proxy/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 
21:00:21.352663 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-dddw7_52e5e476-3337-4715-9e67-b7230874d2d4/controller/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.395745 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-frr-files/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.605307 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-reloader/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.697676 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-metrics/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.702694 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-frr-files/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.745792 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-reloader/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.880248 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-frr-files/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.894285 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-reloader/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.901052 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-metrics/0.log" Nov 25 21:00:21 crc kubenswrapper[4775]: I1125 21:00:21.929970 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-metrics/0.log" Nov 25 21:00:22 crc kubenswrapper[4775]: I1125 21:00:22.233558 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-frr-files/0.log" Nov 25 21:00:22 crc kubenswrapper[4775]: I1125 21:00:22.298837 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-reloader/0.log" Nov 25 21:00:22 crc kubenswrapper[4775]: I1125 21:00:22.307824 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/cp-metrics/0.log" Nov 25 21:00:22 crc kubenswrapper[4775]: I1125 21:00:22.317189 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/controller/0.log" Nov 25 21:00:22 crc kubenswrapper[4775]: I1125 21:00:22.441504 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/frr-metrics/0.log" Nov 25 21:00:22 crc kubenswrapper[4775]: I1125 21:00:22.465492 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/kube-rbac-proxy/0.log" Nov 25 21:00:22 crc kubenswrapper[4775]: I1125 21:00:22.545925 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/kube-rbac-proxy-frr/0.log" Nov 25 21:00:22 crc kubenswrapper[4775]: I1125 21:00:22.579863 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nw99f" podUID="dae78140-5d75-4239-8a07-b7919cea5907" containerName="registry-server" containerID="cri-o://b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2" gracePeriod=2 Nov 25 21:00:22 crc 
kubenswrapper[4775]: I1125 21:00:22.640935 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/reloader/0.log" Nov 25 21:00:22 crc kubenswrapper[4775]: I1125 21:00:22.894537 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-ngmrb_9c2fb5c5-5143-45f7-bcef-e6374fb45624/frr-k8s-webhook-server/0.log" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.077296 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.100441 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64cc678d47-lk7dp_8f53c019-29df-4614-a285-cc2b88dba2ba/manager/0.log" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.196318 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-utilities\") pod \"dae78140-5d75-4239-8a07-b7919cea5907\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.196373 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srlft\" (UniqueName: \"kubernetes.io/projected/dae78140-5d75-4239-8a07-b7919cea5907-kube-api-access-srlft\") pod \"dae78140-5d75-4239-8a07-b7919cea5907\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.196410 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-catalog-content\") pod \"dae78140-5d75-4239-8a07-b7919cea5907\" (UID: \"dae78140-5d75-4239-8a07-b7919cea5907\") " Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 
21:00:23.197669 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-utilities" (OuterVolumeSpecName: "utilities") pod "dae78140-5d75-4239-8a07-b7919cea5907" (UID: "dae78140-5d75-4239-8a07-b7919cea5907"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.201890 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae78140-5d75-4239-8a07-b7919cea5907-kube-api-access-srlft" (OuterVolumeSpecName: "kube-api-access-srlft") pod "dae78140-5d75-4239-8a07-b7919cea5907" (UID: "dae78140-5d75-4239-8a07-b7919cea5907"). InnerVolumeSpecName "kube-api-access-srlft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.267023 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-747fc6cfc5-9qp9q_1efb9150-f88c-4d86-a034-e49f9576f96a/webhook-server/0.log" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.268211 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dae78140-5d75-4239-8a07-b7919cea5907" (UID: "dae78140-5d75-4239-8a07-b7919cea5907"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.298250 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.298280 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srlft\" (UniqueName: \"kubernetes.io/projected/dae78140-5d75-4239-8a07-b7919cea5907-kube-api-access-srlft\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.298292 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae78140-5d75-4239-8a07-b7919cea5907-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.484230 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.508793 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7p6bq_e289e852-bef3-4376-8a15-b94339b1a3a3/kube-rbac-proxy/0.log" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.540762 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.589434 4775 generic.go:334] "Generic (PLEG): container finished" podID="dae78140-5d75-4239-8a07-b7919cea5907" containerID="b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2" exitCode=0 Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.589492 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw99f" event={"ID":"dae78140-5d75-4239-8a07-b7919cea5907","Type":"ContainerDied","Data":"b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2"} Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.589520 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw99f" event={"ID":"dae78140-5d75-4239-8a07-b7919cea5907","Type":"ContainerDied","Data":"bb724f539e887f028ba6196384ea50f1c6390d46fb2e8aa067c5d59fe6ae9e18"} Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.589537 4775 scope.go:117] "RemoveContainer" containerID="b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.589738 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw99f" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.612224 4775 scope.go:117] "RemoveContainer" containerID="4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.641038 4775 scope.go:117] "RemoveContainer" containerID="744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.666709 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw99f"] Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.676733 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw99f"] Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.685843 4775 scope.go:117] "RemoveContainer" containerID="b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2" Nov 25 21:00:23 crc kubenswrapper[4775]: E1125 21:00:23.693494 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2\": container with ID starting with b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2 not found: ID does not exist" containerID="b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.693545 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2"} err="failed to get container status \"b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2\": rpc error: code = NotFound desc = could not find container \"b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2\": container with ID starting with b07415305e2f4fc51cfacec4ac56f5bb42bf6c269523247b60c8b9d927f9ffb2 not found: ID does not exist" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.693572 4775 scope.go:117] "RemoveContainer" containerID="4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e" Nov 25 21:00:23 crc kubenswrapper[4775]: E1125 21:00:23.695048 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e\": container with ID starting with 4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e not found: ID does not exist" containerID="4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.695091 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e"} err="failed to get container status \"4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e\": rpc error: code = NotFound desc = could not find container \"4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e\": 
container with ID starting with 4a8fc4572b0239e7f9256d360fdcef8a7c0ab1364cf70b9c84bd94b2a8a83f5e not found: ID does not exist" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.695127 4775 scope.go:117] "RemoveContainer" containerID="744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d" Nov 25 21:00:23 crc kubenswrapper[4775]: E1125 21:00:23.698187 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d\": container with ID starting with 744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d not found: ID does not exist" containerID="744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.698231 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d"} err="failed to get container status \"744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d\": rpc error: code = NotFound desc = could not find container \"744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d\": container with ID starting with 744b99d516929768a01309242122189334a363abb80ac97e722dc5b7c98cf48d not found: ID does not exist" Nov 25 21:00:23 crc kubenswrapper[4775]: I1125 21:00:23.979894 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7p6bq_e289e852-bef3-4376-8a15-b94339b1a3a3/speaker/0.log" Nov 25 21:00:24 crc kubenswrapper[4775]: I1125 21:00:24.149638 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b9hkl_0d8756c7-051e-4ab6-bd7b-32a5f2646497/frr/0.log" Nov 25 21:00:24 crc kubenswrapper[4775]: I1125 21:00:24.867832 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae78140-5d75-4239-8a07-b7919cea5907" 
path="/var/lib/kubelet/pods/dae78140-5d75-4239-8a07-b7919cea5907/volumes" Nov 25 21:00:26 crc kubenswrapper[4775]: I1125 21:00:26.846865 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:00:26 crc kubenswrapper[4775]: E1125 21:00:26.847294 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 21:00:27 crc kubenswrapper[4775]: I1125 21:00:27.847790 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:00:27 crc kubenswrapper[4775]: E1125 21:00:27.848123 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:00:33 crc kubenswrapper[4775]: I1125 21:00:33.160704 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:00:33 crc kubenswrapper[4775]: I1125 21:00:33.197004 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:00:39 crc kubenswrapper[4775]: I1125 21:00:39.236362 4775 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_a8f57ee2-05a5-41e8-8d84-9f0b746ef461/util/0.log" Nov 25 21:00:39 crc kubenswrapper[4775]: I1125 21:00:39.432291 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_a8f57ee2-05a5-41e8-8d84-9f0b746ef461/pull/0.log" Nov 25 21:00:39 crc kubenswrapper[4775]: I1125 21:00:39.446542 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_a8f57ee2-05a5-41e8-8d84-9f0b746ef461/util/0.log" Nov 25 21:00:39 crc kubenswrapper[4775]: I1125 21:00:39.462776 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_a8f57ee2-05a5-41e8-8d84-9f0b746ef461/pull/0.log" Nov 25 21:00:39 crc kubenswrapper[4775]: I1125 21:00:39.622142 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_a8f57ee2-05a5-41e8-8d84-9f0b746ef461/util/0.log" Nov 25 21:00:39 crc kubenswrapper[4775]: I1125 21:00:39.650397 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_a8f57ee2-05a5-41e8-8d84-9f0b746ef461/extract/0.log" Nov 25 21:00:39 crc kubenswrapper[4775]: I1125 21:00:39.692900 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ep89l2_a8f57ee2-05a5-41e8-8d84-9f0b746ef461/pull/0.log" Nov 25 21:00:39 crc kubenswrapper[4775]: I1125 21:00:39.813122 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-xjtj6_058435d6-8288-4143-9685-582d6c98e51e/extract-utilities/0.log" Nov 25 21:00:39 crc kubenswrapper[4775]: I1125 21:00:39.967734 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjtj6_058435d6-8288-4143-9685-582d6c98e51e/extract-content/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.007244 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjtj6_058435d6-8288-4143-9685-582d6c98e51e/extract-content/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.034860 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjtj6_058435d6-8288-4143-9685-582d6c98e51e/extract-utilities/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.166617 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjtj6_058435d6-8288-4143-9685-582d6c98e51e/extract-utilities/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.167604 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjtj6_058435d6-8288-4143-9685-582d6c98e51e/extract-content/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.377353 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjtj6_058435d6-8288-4143-9685-582d6c98e51e/registry-server/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.416338 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4mhqv_65bb486f-e7f0-4b80-b8bb-f46971b2fc53/extract-utilities/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.558758 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-4mhqv_65bb486f-e7f0-4b80-b8bb-f46971b2fc53/extract-utilities/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.594063 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4mhqv_65bb486f-e7f0-4b80-b8bb-f46971b2fc53/extract-content/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.617700 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4mhqv_65bb486f-e7f0-4b80-b8bb-f46971b2fc53/extract-content/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.804318 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4mhqv_65bb486f-e7f0-4b80-b8bb-f46971b2fc53/extract-content/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.816756 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4mhqv_65bb486f-e7f0-4b80-b8bb-f46971b2fc53/extract-utilities/0.log" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.847038 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:00:40 crc kubenswrapper[4775]: E1125 21:00:40.847314 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 21:00:40 crc kubenswrapper[4775]: I1125 21:00:40.847595 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:00:40 crc kubenswrapper[4775]: E1125 21:00:40.847953 4775 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.041346 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m_da7c7d85-6da9-4aa0-9b6b-89aca8bef945/util/0.log" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.264963 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4mhqv_65bb486f-e7f0-4b80-b8bb-f46971b2fc53/registry-server/0.log" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.283447 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m_da7c7d85-6da9-4aa0-9b6b-89aca8bef945/util/0.log" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.312456 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m_da7c7d85-6da9-4aa0-9b6b-89aca8bef945/pull/0.log" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.334452 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m_da7c7d85-6da9-4aa0-9b6b-89aca8bef945/pull/0.log" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.520864 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m_da7c7d85-6da9-4aa0-9b6b-89aca8bef945/extract/0.log" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.562599 4775 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m_da7c7d85-6da9-4aa0-9b6b-89aca8bef945/util/0.log" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.569331 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wz58m_da7c7d85-6da9-4aa0-9b6b-89aca8bef945/pull/0.log" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.786041 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4vkgv_a9644641-2767-4104-b381-c7a264debd71/marketplace-operator/0.log" Nov 25 21:00:41 crc kubenswrapper[4775]: I1125 21:00:41.813201 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f92v6_0a58bfa8-f307-48e3-be90-e1ee238efe08/extract-utilities/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.046078 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f92v6_0a58bfa8-f307-48e3-be90-e1ee238efe08/extract-utilities/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.056065 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f92v6_0a58bfa8-f307-48e3-be90-e1ee238efe08/extract-content/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.057501 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f92v6_0a58bfa8-f307-48e3-be90-e1ee238efe08/extract-content/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.212790 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.212899 4775 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openstack/manila-api-0" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.213875 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.213941 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-api" containerStatusID={"Type":"cri-o","ID":"de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9"} pod="openstack/manila-api-0" containerMessage="Container manila-api failed liveness probe, will be restarted" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.213980 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" containerID="cri-o://de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" gracePeriod=30 Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.219050 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="Get \"https://10.217.0.245:8786/healthcheck\": EOF" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.222995 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f92v6_0a58bfa8-f307-48e3-be90-e1ee238efe08/extract-utilities/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.234221 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gvqrf_77c7cb6e-134b-4de7-8d55-21a7e73705e2/extract-utilities/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.243265 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-f92v6_0a58bfa8-f307-48e3-be90-e1ee238efe08/extract-content/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.434128 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f92v6_0a58bfa8-f307-48e3-be90-e1ee238efe08/registry-server/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.468969 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gvqrf_77c7cb6e-134b-4de7-8d55-21a7e73705e2/extract-content/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.497142 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gvqrf_77c7cb6e-134b-4de7-8d55-21a7e73705e2/extract-utilities/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.509369 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gvqrf_77c7cb6e-134b-4de7-8d55-21a7e73705e2/extract-content/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.654187 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gvqrf_77c7cb6e-134b-4de7-8d55-21a7e73705e2/extract-utilities/0.log" Nov 25 21:00:42 crc kubenswrapper[4775]: I1125 21:00:42.654907 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gvqrf_77c7cb6e-134b-4de7-8d55-21a7e73705e2/extract-content/0.log" Nov 25 21:00:43 crc kubenswrapper[4775]: I1125 21:00:43.312856 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gvqrf_77c7cb6e-134b-4de7-8d55-21a7e73705e2/registry-server/0.log" Nov 25 21:00:45 crc kubenswrapper[4775]: E1125 21:00:45.860555 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:00:46 crc kubenswrapper[4775]: I1125 21:00:46.815622 4775 generic.go:334] "Generic (PLEG): container finished" podID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" exitCode=0 Nov 25 21:00:46 crc kubenswrapper[4775]: I1125 21:00:46.815706 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerDied","Data":"de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9"} Nov 25 21:00:46 crc kubenswrapper[4775]: I1125 21:00:46.815773 4775 scope.go:117] "RemoveContainer" containerID="85c970ea1d58b9057d326be7643e8f473a289dfa43dd2152f060ad069cfbdeb7" Nov 25 21:00:46 crc kubenswrapper[4775]: I1125 21:00:46.816567 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:00:46 crc kubenswrapper[4775]: E1125 21:00:46.816875 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:00:52 crc kubenswrapper[4775]: I1125 21:00:52.848448 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:00:52 crc kubenswrapper[4775]: E1125 21:00:52.849821 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" 
pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:00:53 crc kubenswrapper[4775]: I1125 21:00:53.847865 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:00:53 crc kubenswrapper[4775]: E1125 21:00:53.848199 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 21:00:54 crc kubenswrapper[4775]: I1125 21:00:54.646351 4775 scope.go:117] "RemoveContainer" containerID="56c1a980f65ee01529c1686e53fb7c8ed8f03620efdab583bff8d81c597ca89b" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.171006 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401741-kvkb9"] Nov 25 21:01:00 crc kubenswrapper[4775]: E1125 21:01:00.172234 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae78140-5d75-4239-8a07-b7919cea5907" containerName="extract-utilities" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.172257 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae78140-5d75-4239-8a07-b7919cea5907" containerName="extract-utilities" Nov 25 21:01:00 crc kubenswrapper[4775]: E1125 21:01:00.172293 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae78140-5d75-4239-8a07-b7919cea5907" containerName="registry-server" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.172303 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae78140-5d75-4239-8a07-b7919cea5907" containerName="registry-server" Nov 25 21:01:00 crc kubenswrapper[4775]: E1125 21:01:00.172353 4775 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dae78140-5d75-4239-8a07-b7919cea5907" containerName="extract-content" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.172364 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae78140-5d75-4239-8a07-b7919cea5907" containerName="extract-content" Nov 25 21:01:00 crc kubenswrapper[4775]: E1125 21:01:00.172394 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccbe6dfa-b2b5-4415-aff9-e9884a80906c" containerName="collect-profiles" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.172405 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccbe6dfa-b2b5-4415-aff9-e9884a80906c" containerName="collect-profiles" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.172719 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccbe6dfa-b2b5-4415-aff9-e9884a80906c" containerName="collect-profiles" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.172764 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae78140-5d75-4239-8a07-b7919cea5907" containerName="registry-server" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.173779 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.184330 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401741-kvkb9"] Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.286518 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-fernet-keys\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.286609 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cztmr\" (UniqueName: \"kubernetes.io/projected/96a2c83e-656d-4769-92c6-234f61df378b-kube-api-access-cztmr\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.286686 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-combined-ca-bundle\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.286775 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-config-data\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.388203 4775 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-config-data\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.388351 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-fernet-keys\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.388408 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cztmr\" (UniqueName: \"kubernetes.io/projected/96a2c83e-656d-4769-92c6-234f61df378b-kube-api-access-cztmr\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.388460 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-combined-ca-bundle\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.396203 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-fernet-keys\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.397311 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-config-data\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.402346 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-combined-ca-bundle\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.413507 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cztmr\" (UniqueName: \"kubernetes.io/projected/96a2c83e-656d-4769-92c6-234f61df378b-kube-api-access-cztmr\") pod \"keystone-cron-29401741-kvkb9\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:00 crc kubenswrapper[4775]: I1125 21:01:00.513168 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:01 crc kubenswrapper[4775]: I1125 21:01:01.077912 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401741-kvkb9"] Nov 25 21:01:01 crc kubenswrapper[4775]: I1125 21:01:01.847877 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:01:01 crc kubenswrapper[4775]: E1125 21:01:01.848541 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:01:01 crc kubenswrapper[4775]: I1125 21:01:01.995612 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401741-kvkb9" event={"ID":"96a2c83e-656d-4769-92c6-234f61df378b","Type":"ContainerStarted","Data":"2e1f7e0115fe787bf5ab3b3d1233dcf1ba75783bd3a1e5a83c86c542b06ed97f"} Nov 25 21:01:01 crc kubenswrapper[4775]: I1125 21:01:01.995687 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401741-kvkb9" event={"ID":"96a2c83e-656d-4769-92c6-234f61df378b","Type":"ContainerStarted","Data":"bfe2655e6add0f6ac46707b4ef94d607ad45dc7fe3c2be447a7723a1f3856588"} Nov 25 21:01:02 crc kubenswrapper[4775]: I1125 21:01:02.023882 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401741-kvkb9" podStartSLOduration=2.023860407 podStartE2EDuration="2.023860407s" podCreationTimestamp="2025-11-25 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 21:01:02.013103346 +0000 UTC m=+5243.929465712" watchObservedRunningTime="2025-11-25 21:01:02.023860407 +0000 UTC 
m=+5243.940222773" Nov 25 21:01:04 crc kubenswrapper[4775]: I1125 21:01:04.034862 4775 generic.go:334] "Generic (PLEG): container finished" podID="96a2c83e-656d-4769-92c6-234f61df378b" containerID="2e1f7e0115fe787bf5ab3b3d1233dcf1ba75783bd3a1e5a83c86c542b06ed97f" exitCode=0 Nov 25 21:01:04 crc kubenswrapper[4775]: I1125 21:01:04.035221 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401741-kvkb9" event={"ID":"96a2c83e-656d-4769-92c6-234f61df378b","Type":"ContainerDied","Data":"2e1f7e0115fe787bf5ab3b3d1233dcf1ba75783bd3a1e5a83c86c542b06ed97f"} Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.434207 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.587341 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cztmr\" (UniqueName: \"kubernetes.io/projected/96a2c83e-656d-4769-92c6-234f61df378b-kube-api-access-cztmr\") pod \"96a2c83e-656d-4769-92c6-234f61df378b\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.587416 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-config-data\") pod \"96a2c83e-656d-4769-92c6-234f61df378b\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.587537 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-fernet-keys\") pod \"96a2c83e-656d-4769-92c6-234f61df378b\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.587682 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-combined-ca-bundle\") pod \"96a2c83e-656d-4769-92c6-234f61df378b\" (UID: \"96a2c83e-656d-4769-92c6-234f61df378b\") " Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.593769 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "96a2c83e-656d-4769-92c6-234f61df378b" (UID: "96a2c83e-656d-4769-92c6-234f61df378b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.612719 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a2c83e-656d-4769-92c6-234f61df378b-kube-api-access-cztmr" (OuterVolumeSpecName: "kube-api-access-cztmr") pod "96a2c83e-656d-4769-92c6-234f61df378b" (UID: "96a2c83e-656d-4769-92c6-234f61df378b"). InnerVolumeSpecName "kube-api-access-cztmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.659735 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96a2c83e-656d-4769-92c6-234f61df378b" (UID: "96a2c83e-656d-4769-92c6-234f61df378b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.669415 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-config-data" (OuterVolumeSpecName: "config-data") pod "96a2c83e-656d-4769-92c6-234f61df378b" (UID: "96a2c83e-656d-4769-92c6-234f61df378b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.690547 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cztmr\" (UniqueName: \"kubernetes.io/projected/96a2c83e-656d-4769-92c6-234f61df378b-kube-api-access-cztmr\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.690590 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.690604 4775 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:05 crc kubenswrapper[4775]: I1125 21:01:05.690615 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a2c83e-656d-4769-92c6-234f61df378b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:06 crc kubenswrapper[4775]: I1125 21:01:06.050533 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401741-kvkb9" event={"ID":"96a2c83e-656d-4769-92c6-234f61df378b","Type":"ContainerDied","Data":"bfe2655e6add0f6ac46707b4ef94d607ad45dc7fe3c2be447a7723a1f3856588"} Nov 25 21:01:06 crc kubenswrapper[4775]: I1125 21:01:06.050575 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe2655e6add0f6ac46707b4ef94d607ad45dc7fe3c2be447a7723a1f3856588" Nov 25 21:01:06 crc kubenswrapper[4775]: I1125 21:01:06.050593 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401741-kvkb9" Nov 25 21:01:06 crc kubenswrapper[4775]: I1125 21:01:06.847615 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:01:06 crc kubenswrapper[4775]: E1125 21:01:06.848063 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:01:08 crc kubenswrapper[4775]: I1125 21:01:08.881421 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:01:08 crc kubenswrapper[4775]: E1125 21:01:08.882149 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 21:01:14 crc kubenswrapper[4775]: I1125 21:01:14.847805 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:01:14 crc kubenswrapper[4775]: E1125 21:01:14.848503 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:01:21 crc kubenswrapper[4775]: I1125 21:01:21.847107 4775 scope.go:117] 
"RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:01:21 crc kubenswrapper[4775]: E1125 21:01:21.847948 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:01:22 crc kubenswrapper[4775]: I1125 21:01:22.851525 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:01:22 crc kubenswrapper[4775]: E1125 21:01:22.852427 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 21:01:24 crc kubenswrapper[4775]: I1125 21:01:24.912851 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j4vmh"] Nov 25 21:01:24 crc kubenswrapper[4775]: E1125 21:01:24.913620 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a2c83e-656d-4769-92c6-234f61df378b" containerName="keystone-cron" Nov 25 21:01:24 crc kubenswrapper[4775]: I1125 21:01:24.913637 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a2c83e-656d-4769-92c6-234f61df378b" containerName="keystone-cron" Nov 25 21:01:24 crc kubenswrapper[4775]: I1125 21:01:24.913917 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a2c83e-656d-4769-92c6-234f61df378b" containerName="keystone-cron" Nov 25 21:01:24 crc kubenswrapper[4775]: I1125 
21:01:24.915609 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:24 crc kubenswrapper[4775]: I1125 21:01:24.941711 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j4vmh"] Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.057250 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-utilities\") pod \"community-operators-j4vmh\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.057788 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-catalog-content\") pod \"community-operators-j4vmh\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.057851 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxrz\" (UniqueName: \"kubernetes.io/projected/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-kube-api-access-fdxrz\") pod \"community-operators-j4vmh\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.159173 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-utilities\") pod \"community-operators-j4vmh\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 
21:01:25.159310 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-catalog-content\") pod \"community-operators-j4vmh\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.159347 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdxrz\" (UniqueName: \"kubernetes.io/projected/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-kube-api-access-fdxrz\") pod \"community-operators-j4vmh\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.159715 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-utilities\") pod \"community-operators-j4vmh\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.159808 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-catalog-content\") pod \"community-operators-j4vmh\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.186830 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdxrz\" (UniqueName: \"kubernetes.io/projected/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-kube-api-access-fdxrz\") pod \"community-operators-j4vmh\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.251215 4775 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:25 crc kubenswrapper[4775]: I1125 21:01:25.814817 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j4vmh"] Nov 25 21:01:26 crc kubenswrapper[4775]: I1125 21:01:26.228611 4775 generic.go:334] "Generic (PLEG): container finished" podID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerID="f2fd6fdc5b9be6cc0597c4cbb84efe8ed0e954b0d6ad7d715225b76996e369ca" exitCode=0 Nov 25 21:01:26 crc kubenswrapper[4775]: I1125 21:01:26.228685 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4vmh" event={"ID":"26756de9-6dc3-4d28-b0c2-a0d935dc75b9","Type":"ContainerDied","Data":"f2fd6fdc5b9be6cc0597c4cbb84efe8ed0e954b0d6ad7d715225b76996e369ca"} Nov 25 21:01:26 crc kubenswrapper[4775]: I1125 21:01:26.229019 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4vmh" event={"ID":"26756de9-6dc3-4d28-b0c2-a0d935dc75b9","Type":"ContainerStarted","Data":"295fd956460e2efd330bf37bcdc2f7d403f07773dee2f68374ae6e5431fb2849"} Nov 25 21:01:27 crc kubenswrapper[4775]: I1125 21:01:27.241774 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4vmh" event={"ID":"26756de9-6dc3-4d28-b0c2-a0d935dc75b9","Type":"ContainerStarted","Data":"963617eb09f52a7a2a6176468ec682646395522bc20da37304888e715bb9bebf"} Nov 25 21:01:28 crc kubenswrapper[4775]: I1125 21:01:28.254380 4775 generic.go:334] "Generic (PLEG): container finished" podID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerID="963617eb09f52a7a2a6176468ec682646395522bc20da37304888e715bb9bebf" exitCode=0 Nov 25 21:01:28 crc kubenswrapper[4775]: I1125 21:01:28.254480 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4vmh" 
event={"ID":"26756de9-6dc3-4d28-b0c2-a0d935dc75b9","Type":"ContainerDied","Data":"963617eb09f52a7a2a6176468ec682646395522bc20da37304888e715bb9bebf"} Nov 25 21:01:28 crc kubenswrapper[4775]: I1125 21:01:28.854780 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:01:28 crc kubenswrapper[4775]: E1125 21:01:28.855637 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:01:29 crc kubenswrapper[4775]: I1125 21:01:29.272907 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4vmh" event={"ID":"26756de9-6dc3-4d28-b0c2-a0d935dc75b9","Type":"ContainerStarted","Data":"46fd05b000438896bb1321ab30d097d71dea916fe0dec7edfa94f13f201b08b5"} Nov 25 21:01:29 crc kubenswrapper[4775]: I1125 21:01:29.315159 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j4vmh" podStartSLOduration=2.857420331 podStartE2EDuration="5.315137454s" podCreationTimestamp="2025-11-25 21:01:24 +0000 UTC" firstStartedPulling="2025-11-25 21:01:26.230531409 +0000 UTC m=+5268.146893785" lastFinishedPulling="2025-11-25 21:01:28.688248542 +0000 UTC m=+5270.604610908" observedRunningTime="2025-11-25 21:01:29.293743696 +0000 UTC m=+5271.210106092" watchObservedRunningTime="2025-11-25 21:01:29.315137454 +0000 UTC m=+5271.231499820" Nov 25 21:01:32 crc kubenswrapper[4775]: I1125 21:01:32.853967 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:01:32 crc kubenswrapper[4775]: E1125 21:01:32.854865 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:01:35 crc kubenswrapper[4775]: I1125 21:01:35.251512 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:35 crc kubenswrapper[4775]: I1125 21:01:35.251895 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:35 crc kubenswrapper[4775]: I1125 21:01:35.751033 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:35 crc kubenswrapper[4775]: I1125 21:01:35.814151 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:35 crc kubenswrapper[4775]: I1125 21:01:35.847226 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:01:35 crc kubenswrapper[4775]: E1125 21:01:35.847794 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 21:01:37 crc kubenswrapper[4775]: I1125 21:01:37.461051 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j4vmh"] Nov 25 21:01:37 crc kubenswrapper[4775]: I1125 21:01:37.461348 4775 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-j4vmh" podUID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerName="registry-server" containerID="cri-o://46fd05b000438896bb1321ab30d097d71dea916fe0dec7edfa94f13f201b08b5" gracePeriod=2 Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.369950 4775 generic.go:334] "Generic (PLEG): container finished" podID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerID="46fd05b000438896bb1321ab30d097d71dea916fe0dec7edfa94f13f201b08b5" exitCode=0 Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.370009 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4vmh" event={"ID":"26756de9-6dc3-4d28-b0c2-a0d935dc75b9","Type":"ContainerDied","Data":"46fd05b000438896bb1321ab30d097d71dea916fe0dec7edfa94f13f201b08b5"} Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.538889 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.666050 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-utilities\") pod \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.666382 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdxrz\" (UniqueName: \"kubernetes.io/projected/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-kube-api-access-fdxrz\") pod \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.666555 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-catalog-content\") pod \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\" (UID: \"26756de9-6dc3-4d28-b0c2-a0d935dc75b9\") " Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.672037 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-kube-api-access-fdxrz" (OuterVolumeSpecName: "kube-api-access-fdxrz") pod "26756de9-6dc3-4d28-b0c2-a0d935dc75b9" (UID: "26756de9-6dc3-4d28-b0c2-a0d935dc75b9"). InnerVolumeSpecName "kube-api-access-fdxrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.672355 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-utilities" (OuterVolumeSpecName: "utilities") pod "26756de9-6dc3-4d28-b0c2-a0d935dc75b9" (UID: "26756de9-6dc3-4d28-b0c2-a0d935dc75b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.722665 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26756de9-6dc3-4d28-b0c2-a0d935dc75b9" (UID: "26756de9-6dc3-4d28-b0c2-a0d935dc75b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.769552 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdxrz\" (UniqueName: \"kubernetes.io/projected/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-kube-api-access-fdxrz\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.770092 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:38 crc kubenswrapper[4775]: I1125 21:01:38.770113 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26756de9-6dc3-4d28-b0c2-a0d935dc75b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:39 crc kubenswrapper[4775]: I1125 21:01:39.380079 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4vmh" event={"ID":"26756de9-6dc3-4d28-b0c2-a0d935dc75b9","Type":"ContainerDied","Data":"295fd956460e2efd330bf37bcdc2f7d403f07773dee2f68374ae6e5431fb2849"} Nov 25 21:01:39 crc kubenswrapper[4775]: I1125 21:01:39.380467 4775 scope.go:117] "RemoveContainer" containerID="46fd05b000438896bb1321ab30d097d71dea916fe0dec7edfa94f13f201b08b5" Nov 25 21:01:39 crc kubenswrapper[4775]: I1125 21:01:39.380178 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j4vmh" Nov 25 21:01:39 crc kubenswrapper[4775]: I1125 21:01:39.401369 4775 scope.go:117] "RemoveContainer" containerID="963617eb09f52a7a2a6176468ec682646395522bc20da37304888e715bb9bebf" Nov 25 21:01:39 crc kubenswrapper[4775]: I1125 21:01:39.404927 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j4vmh"] Nov 25 21:01:39 crc kubenswrapper[4775]: I1125 21:01:39.414719 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j4vmh"] Nov 25 21:01:39 crc kubenswrapper[4775]: I1125 21:01:39.420905 4775 scope.go:117] "RemoveContainer" containerID="f2fd6fdc5b9be6cc0597c4cbb84efe8ed0e954b0d6ad7d715225b76996e369ca" Nov 25 21:01:39 crc kubenswrapper[4775]: I1125 21:01:39.849215 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:01:39 crc kubenswrapper[4775]: E1125 21:01:39.851474 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:01:40 crc kubenswrapper[4775]: I1125 21:01:40.861974 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" path="/var/lib/kubelet/pods/26756de9-6dc3-4d28-b0c2-a0d935dc75b9/volumes" Nov 25 21:01:47 crc kubenswrapper[4775]: I1125 21:01:47.847500 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:01:47 crc kubenswrapper[4775]: E1125 21:01:47.848296 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:01:50 crc kubenswrapper[4775]: I1125 21:01:50.847343 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:01:50 crc kubenswrapper[4775]: E1125 21:01:50.848259 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 21:01:54 crc kubenswrapper[4775]: I1125 21:01:54.846538 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:01:54 crc kubenswrapper[4775]: E1125 21:01:54.847227 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:01:59 crc kubenswrapper[4775]: I1125 21:01:59.847364 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:01:59 crc kubenswrapper[4775]: E1125 21:01:59.849625 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" 
podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:02:04 crc kubenswrapper[4775]: I1125 21:02:04.848089 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:02:04 crc kubenswrapper[4775]: E1125 21:02:04.849246 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w4zbm_openshift-machine-config-operator(bdb8b79f-4ccd-4606-8f27-e26301ffc656)\"" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" Nov 25 21:02:07 crc kubenswrapper[4775]: I1125 21:02:07.847802 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:02:07 crc kubenswrapper[4775]: E1125 21:02:07.851019 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:02:13 crc kubenswrapper[4775]: I1125 21:02:13.847497 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:02:13 crc kubenswrapper[4775]: E1125 21:02:13.848295 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:02:19 crc kubenswrapper[4775]: I1125 21:02:19.848264 4775 scope.go:117] "RemoveContainer" 
containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:02:20 crc kubenswrapper[4775]: I1125 21:02:20.875215 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"2e5dc574bf7e9f5ff295fba89c7f5111e700c07e045e03c467d5fd7a2b8aff21"} Nov 25 21:02:21 crc kubenswrapper[4775]: I1125 21:02:21.848161 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:02:21 crc kubenswrapper[4775]: E1125 21:02:21.848847 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:02:24 crc kubenswrapper[4775]: I1125 21:02:24.847331 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:02:24 crc kubenswrapper[4775]: E1125 21:02:24.848372 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:02:25 crc kubenswrapper[4775]: I1125 21:02:25.924703 4775 generic.go:334] "Generic (PLEG): container finished" podID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" containerID="0c4f3c7bf5238737bc73c7c049023eefe0f8b562f202efbc7ccf3d02021b8c95" exitCode=0 Nov 25 21:02:25 crc kubenswrapper[4775]: I1125 21:02:25.924846 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-24v8q/must-gather-5qzmq" event={"ID":"310ceeb8-f3c1-4fde-8667-8c8f837be80b","Type":"ContainerDied","Data":"0c4f3c7bf5238737bc73c7c049023eefe0f8b562f202efbc7ccf3d02021b8c95"} Nov 25 21:02:25 crc kubenswrapper[4775]: I1125 21:02:25.926088 4775 scope.go:117] "RemoveContainer" containerID="0c4f3c7bf5238737bc73c7c049023eefe0f8b562f202efbc7ccf3d02021b8c95" Nov 25 21:02:26 crc kubenswrapper[4775]: I1125 21:02:26.336595 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-24v8q_must-gather-5qzmq_310ceeb8-f3c1-4fde-8667-8c8f837be80b/gather/0.log" Nov 25 21:02:33 crc kubenswrapper[4775]: I1125 21:02:33.781352 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-24v8q/must-gather-5qzmq"] Nov 25 21:02:33 crc kubenswrapper[4775]: I1125 21:02:33.783768 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-24v8q/must-gather-5qzmq" podUID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" containerName="copy" containerID="cri-o://5423c0197716a5e76ab9d83fc6058ddb1c39d34bcb48c7a18da19ff9289fd1e7" gracePeriod=2 Nov 25 21:02:33 crc kubenswrapper[4775]: I1125 21:02:33.791639 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-24v8q/must-gather-5qzmq"] Nov 25 21:02:33 crc kubenswrapper[4775]: I1125 21:02:33.847721 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:02:33 crc kubenswrapper[4775]: E1125 21:02:33.847953 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.032224 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-24v8q_must-gather-5qzmq_310ceeb8-f3c1-4fde-8667-8c8f837be80b/copy/0.log" Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.032987 4775 generic.go:334] "Generic (PLEG): container finished" podID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" containerID="5423c0197716a5e76ab9d83fc6058ddb1c39d34bcb48c7a18da19ff9289fd1e7" exitCode=143 Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.255556 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-24v8q_must-gather-5qzmq_310ceeb8-f3c1-4fde-8667-8c8f837be80b/copy/0.log" Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.255982 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.354578 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fwms\" (UniqueName: \"kubernetes.io/projected/310ceeb8-f3c1-4fde-8667-8c8f837be80b-kube-api-access-8fwms\") pod \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\" (UID: \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\") " Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.354942 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/310ceeb8-f3c1-4fde-8667-8c8f837be80b-must-gather-output\") pod \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\" (UID: \"310ceeb8-f3c1-4fde-8667-8c8f837be80b\") " Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.360349 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310ceeb8-f3c1-4fde-8667-8c8f837be80b-kube-api-access-8fwms" (OuterVolumeSpecName: "kube-api-access-8fwms") pod "310ceeb8-f3c1-4fde-8667-8c8f837be80b" (UID: "310ceeb8-f3c1-4fde-8667-8c8f837be80b"). InnerVolumeSpecName "kube-api-access-8fwms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.457958 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fwms\" (UniqueName: \"kubernetes.io/projected/310ceeb8-f3c1-4fde-8667-8c8f837be80b-kube-api-access-8fwms\") on node \"crc\" DevicePath \"\"" Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.531839 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310ceeb8-f3c1-4fde-8667-8c8f837be80b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "310ceeb8-f3c1-4fde-8667-8c8f837be80b" (UID: "310ceeb8-f3c1-4fde-8667-8c8f837be80b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.560080 4775 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/310ceeb8-f3c1-4fde-8667-8c8f837be80b-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 21:02:34 crc kubenswrapper[4775]: I1125 21:02:34.858133 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" path="/var/lib/kubelet/pods/310ceeb8-f3c1-4fde-8667-8c8f837be80b/volumes" Nov 25 21:02:35 crc kubenswrapper[4775]: I1125 21:02:35.043204 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-24v8q_must-gather-5qzmq_310ceeb8-f3c1-4fde-8667-8c8f837be80b/copy/0.log" Nov 25 21:02:35 crc kubenswrapper[4775]: I1125 21:02:35.043977 4775 scope.go:117] "RemoveContainer" containerID="5423c0197716a5e76ab9d83fc6058ddb1c39d34bcb48c7a18da19ff9289fd1e7" Nov 25 21:02:35 crc kubenswrapper[4775]: I1125 21:02:35.044106 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-24v8q/must-gather-5qzmq" Nov 25 21:02:35 crc kubenswrapper[4775]: I1125 21:02:35.065779 4775 scope.go:117] "RemoveContainer" containerID="0c4f3c7bf5238737bc73c7c049023eefe0f8b562f202efbc7ccf3d02021b8c95" Nov 25 21:02:36 crc kubenswrapper[4775]: I1125 21:02:36.847986 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:02:36 crc kubenswrapper[4775]: E1125 21:02:36.848923 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.775737 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f4svh"] Nov 25 21:02:44 crc kubenswrapper[4775]: E1125 21:02:44.776946 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerName="extract-utilities" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.776968 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerName="extract-utilities" Nov 25 21:02:44 crc kubenswrapper[4775]: E1125 21:02:44.776985 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" containerName="gather" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.776997 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" containerName="gather" Nov 25 21:02:44 crc kubenswrapper[4775]: E1125 21:02:44.777016 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" 
containerName="registry-server" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.777029 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerName="registry-server" Nov 25 21:02:44 crc kubenswrapper[4775]: E1125 21:02:44.777069 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerName="extract-content" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.777081 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerName="extract-content" Nov 25 21:02:44 crc kubenswrapper[4775]: E1125 21:02:44.777107 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" containerName="copy" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.777253 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" containerName="copy" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.777604 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" containerName="gather" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.777636 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="310ceeb8-f3c1-4fde-8667-8c8f837be80b" containerName="copy" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.777687 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="26756de9-6dc3-4d28-b0c2-a0d935dc75b9" containerName="registry-server" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.781335 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.789806 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4svh"] Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.890654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-catalog-content\") pod \"redhat-operators-f4svh\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.890829 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-utilities\") pod \"redhat-operators-f4svh\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.890868 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvsr\" (UniqueName: \"kubernetes.io/projected/115d7aa3-56c5-4145-ba7a-4d0d995228a9-kube-api-access-6vvsr\") pod \"redhat-operators-f4svh\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.993795 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-catalog-content\") pod \"redhat-operators-f4svh\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.994267 4775 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-utilities\") pod \"redhat-operators-f4svh\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.994334 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-catalog-content\") pod \"redhat-operators-f4svh\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.994369 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvsr\" (UniqueName: \"kubernetes.io/projected/115d7aa3-56c5-4145-ba7a-4d0d995228a9-kube-api-access-6vvsr\") pod \"redhat-operators-f4svh\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:44 crc kubenswrapper[4775]: I1125 21:02:44.994627 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-utilities\") pod \"redhat-operators-f4svh\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:45 crc kubenswrapper[4775]: I1125 21:02:45.015107 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvsr\" (UniqueName: \"kubernetes.io/projected/115d7aa3-56c5-4145-ba7a-4d0d995228a9-kube-api-access-6vvsr\") pod \"redhat-operators-f4svh\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:45 crc kubenswrapper[4775]: I1125 21:02:45.121335 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:45 crc kubenswrapper[4775]: I1125 21:02:45.683996 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4svh"] Nov 25 21:02:46 crc kubenswrapper[4775]: I1125 21:02:46.185909 4775 generic.go:334] "Generic (PLEG): container finished" podID="115d7aa3-56c5-4145-ba7a-4d0d995228a9" containerID="5a760e42449f6d678378a1085c82aeb3d694ade99578668cd7383154da1d60fa" exitCode=0 Nov 25 21:02:46 crc kubenswrapper[4775]: I1125 21:02:46.186222 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4svh" event={"ID":"115d7aa3-56c5-4145-ba7a-4d0d995228a9","Type":"ContainerDied","Data":"5a760e42449f6d678378a1085c82aeb3d694ade99578668cd7383154da1d60fa"} Nov 25 21:02:46 crc kubenswrapper[4775]: I1125 21:02:46.186253 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4svh" event={"ID":"115d7aa3-56c5-4145-ba7a-4d0d995228a9","Type":"ContainerStarted","Data":"f397861cde44bd214250baaab0370d926e6857bfc8c31059750a1e024d2d8f6b"} Nov 25 21:02:47 crc kubenswrapper[4775]: I1125 21:02:47.202418 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4svh" event={"ID":"115d7aa3-56c5-4145-ba7a-4d0d995228a9","Type":"ContainerStarted","Data":"9c23ae3fcd87c1e560c15763c317237e51c03ee68b2657dbc576ed407c0b7927"} Nov 25 21:02:47 crc kubenswrapper[4775]: I1125 21:02:47.847534 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:02:47 crc kubenswrapper[4775]: E1125 21:02:47.847941 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" 
podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:02:48 crc kubenswrapper[4775]: I1125 21:02:48.214872 4775 generic.go:334] "Generic (PLEG): container finished" podID="115d7aa3-56c5-4145-ba7a-4d0d995228a9" containerID="9c23ae3fcd87c1e560c15763c317237e51c03ee68b2657dbc576ed407c0b7927" exitCode=0 Nov 25 21:02:48 crc kubenswrapper[4775]: I1125 21:02:48.214923 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4svh" event={"ID":"115d7aa3-56c5-4145-ba7a-4d0d995228a9","Type":"ContainerDied","Data":"9c23ae3fcd87c1e560c15763c317237e51c03ee68b2657dbc576ed407c0b7927"} Nov 25 21:02:48 crc kubenswrapper[4775]: I1125 21:02:48.850446 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:02:48 crc kubenswrapper[4775]: E1125 21:02:48.851015 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:02:49 crc kubenswrapper[4775]: I1125 21:02:49.227674 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4svh" event={"ID":"115d7aa3-56c5-4145-ba7a-4d0d995228a9","Type":"ContainerStarted","Data":"89aab9c6988f2d4800945ff475c60de090e0ebaa5ae24fb41efa3a36fd74f7da"} Nov 25 21:02:49 crc kubenswrapper[4775]: I1125 21:02:49.246825 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f4svh" podStartSLOduration=2.543508992 podStartE2EDuration="5.246802542s" podCreationTimestamp="2025-11-25 21:02:44 +0000 UTC" firstStartedPulling="2025-11-25 21:02:46.188770106 +0000 UTC m=+5348.105132472" lastFinishedPulling="2025-11-25 21:02:48.892063656 +0000 UTC 
m=+5350.808426022" observedRunningTime="2025-11-25 21:02:49.245293352 +0000 UTC m=+5351.161655718" watchObservedRunningTime="2025-11-25 21:02:49.246802542 +0000 UTC m=+5351.163164908" Nov 25 21:02:55 crc kubenswrapper[4775]: I1125 21:02:55.122399 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:55 crc kubenswrapper[4775]: I1125 21:02:55.123053 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:55 crc kubenswrapper[4775]: I1125 21:02:55.174052 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:55 crc kubenswrapper[4775]: I1125 21:02:55.324233 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:55 crc kubenswrapper[4775]: I1125 21:02:55.432832 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4svh"] Nov 25 21:02:57 crc kubenswrapper[4775]: I1125 21:02:57.306959 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4svh" podUID="115d7aa3-56c5-4145-ba7a-4d0d995228a9" containerName="registry-server" containerID="cri-o://89aab9c6988f2d4800945ff475c60de090e0ebaa5ae24fb41efa3a36fd74f7da" gracePeriod=2 Nov 25 21:02:58 crc kubenswrapper[4775]: I1125 21:02:58.856776 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:02:58 crc kubenswrapper[4775]: E1125 21:02:58.857213 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" 
podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.336949 4775 generic.go:334] "Generic (PLEG): container finished" podID="115d7aa3-56c5-4145-ba7a-4d0d995228a9" containerID="89aab9c6988f2d4800945ff475c60de090e0ebaa5ae24fb41efa3a36fd74f7da" exitCode=0 Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.337033 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4svh" event={"ID":"115d7aa3-56c5-4145-ba7a-4d0d995228a9","Type":"ContainerDied","Data":"89aab9c6988f2d4800945ff475c60de090e0ebaa5ae24fb41efa3a36fd74f7da"} Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.611424 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.724843 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvsr\" (UniqueName: \"kubernetes.io/projected/115d7aa3-56c5-4145-ba7a-4d0d995228a9-kube-api-access-6vvsr\") pod \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.724897 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-catalog-content\") pod \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.725250 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-utilities\") pod \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\" (UID: \"115d7aa3-56c5-4145-ba7a-4d0d995228a9\") " Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.726689 4775 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-utilities" (OuterVolumeSpecName: "utilities") pod "115d7aa3-56c5-4145-ba7a-4d0d995228a9" (UID: "115d7aa3-56c5-4145-ba7a-4d0d995228a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.731625 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115d7aa3-56c5-4145-ba7a-4d0d995228a9-kube-api-access-6vvsr" (OuterVolumeSpecName: "kube-api-access-6vvsr") pod "115d7aa3-56c5-4145-ba7a-4d0d995228a9" (UID: "115d7aa3-56c5-4145-ba7a-4d0d995228a9"). InnerVolumeSpecName "kube-api-access-6vvsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.828453 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.828490 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvsr\" (UniqueName: \"kubernetes.io/projected/115d7aa3-56c5-4145-ba7a-4d0d995228a9-kube-api-access-6vvsr\") on node \"crc\" DevicePath \"\"" Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.855969 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "115d7aa3-56c5-4145-ba7a-4d0d995228a9" (UID: "115d7aa3-56c5-4145-ba7a-4d0d995228a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:02:59 crc kubenswrapper[4775]: I1125 21:02:59.930910 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115d7aa3-56c5-4145-ba7a-4d0d995228a9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:03:00 crc kubenswrapper[4775]: I1125 21:03:00.353875 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4svh" event={"ID":"115d7aa3-56c5-4145-ba7a-4d0d995228a9","Type":"ContainerDied","Data":"f397861cde44bd214250baaab0370d926e6857bfc8c31059750a1e024d2d8f6b"} Nov 25 21:03:00 crc kubenswrapper[4775]: I1125 21:03:00.353953 4775 scope.go:117] "RemoveContainer" containerID="89aab9c6988f2d4800945ff475c60de090e0ebaa5ae24fb41efa3a36fd74f7da" Nov 25 21:03:00 crc kubenswrapper[4775]: I1125 21:03:00.354189 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4svh" Nov 25 21:03:00 crc kubenswrapper[4775]: I1125 21:03:00.388895 4775 scope.go:117] "RemoveContainer" containerID="9c23ae3fcd87c1e560c15763c317237e51c03ee68b2657dbc576ed407c0b7927" Nov 25 21:03:00 crc kubenswrapper[4775]: I1125 21:03:00.419415 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4svh"] Nov 25 21:03:00 crc kubenswrapper[4775]: I1125 21:03:00.435052 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4svh"] Nov 25 21:03:00 crc kubenswrapper[4775]: I1125 21:03:00.435571 4775 scope.go:117] "RemoveContainer" containerID="5a760e42449f6d678378a1085c82aeb3d694ade99578668cd7383154da1d60fa" Nov 25 21:03:00 crc kubenswrapper[4775]: I1125 21:03:00.849106 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:03:00 crc kubenswrapper[4775]: E1125 21:03:00.849357 4775 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:03:00 crc kubenswrapper[4775]: I1125 21:03:00.859305 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115d7aa3-56c5-4145-ba7a-4d0d995228a9" path="/var/lib/kubelet/pods/115d7aa3-56c5-4145-ba7a-4d0d995228a9/volumes" Nov 25 21:03:10 crc kubenswrapper[4775]: I1125 21:03:10.847995 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:03:10 crc kubenswrapper[4775]: E1125 21:03:10.849092 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:03:14 crc kubenswrapper[4775]: I1125 21:03:14.848003 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:03:15 crc kubenswrapper[4775]: I1125 21:03:15.520100 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerStarted","Data":"12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367"} Nov 25 21:03:17 crc kubenswrapper[4775]: I1125 21:03:17.540751 4775 generic.go:334] "Generic (PLEG): container finished" podID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" exitCode=1 Nov 25 21:03:17 crc kubenswrapper[4775]: I1125 21:03:17.540824 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-share-share1-0" event={"ID":"0a88473d-4ba5-4147-bf60-128f0b7ea8f6","Type":"ContainerDied","Data":"12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367"} Nov 25 21:03:17 crc kubenswrapper[4775]: I1125 21:03:17.541071 4775 scope.go:117] "RemoveContainer" containerID="dd58d59271a76b6a864cdfc817c9ecfdcc17809ead0ecf01e9d03f17b0ed915d" Nov 25 21:03:17 crc kubenswrapper[4775]: I1125 21:03:17.541553 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:03:17 crc kubenswrapper[4775]: E1125 21:03:17.542000 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:03:23 crc kubenswrapper[4775]: I1125 21:03:23.104582 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 21:03:23 crc kubenswrapper[4775]: I1125 21:03:23.105258 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 21:03:23 crc kubenswrapper[4775]: I1125 21:03:23.105278 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 21:03:23 crc kubenswrapper[4775]: I1125 21:03:23.106276 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:03:23 crc kubenswrapper[4775]: E1125 21:03:23.106729 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share 
pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:03:23 crc kubenswrapper[4775]: I1125 21:03:23.847979 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:03:23 crc kubenswrapper[4775]: E1125 21:03:23.848373 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:03:36 crc kubenswrapper[4775]: I1125 21:03:36.848270 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:03:36 crc kubenswrapper[4775]: E1125 21:03:36.849464 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:03:37 crc kubenswrapper[4775]: I1125 21:03:37.846795 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:03:37 crc kubenswrapper[4775]: E1125 21:03:37.847306 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:03:50 crc kubenswrapper[4775]: I1125 21:03:50.847239 4775 scope.go:117] "RemoveContainer" 
containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:03:50 crc kubenswrapper[4775]: E1125 21:03:50.847869 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:03:50 crc kubenswrapper[4775]: I1125 21:03:50.847905 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:03:50 crc kubenswrapper[4775]: E1125 21:03:50.848441 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:03:54 crc kubenswrapper[4775]: I1125 21:03:54.834686 4775 scope.go:117] "RemoveContainer" containerID="636067f49a95fdd2cb7f882b5436803b7056fa407ad401b36b891fdab81526e9" Nov 25 21:04:01 crc kubenswrapper[4775]: I1125 21:04:01.848881 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:04:01 crc kubenswrapper[4775]: E1125 21:04:01.849780 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:04:04 crc kubenswrapper[4775]: I1125 21:04:04.848156 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 
25 21:04:04 crc kubenswrapper[4775]: E1125 21:04:04.849381 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:04:12 crc kubenswrapper[4775]: I1125 21:04:12.848715 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:04:12 crc kubenswrapper[4775]: E1125 21:04:12.849754 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:04:16 crc kubenswrapper[4775]: I1125 21:04:16.847679 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:04:16 crc kubenswrapper[4775]: E1125 21:04:16.848919 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:04:24 crc kubenswrapper[4775]: I1125 21:04:24.847316 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:04:24 crc kubenswrapper[4775]: E1125 21:04:24.852736 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:04:31 crc kubenswrapper[4775]: I1125 21:04:31.849671 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:04:31 crc kubenswrapper[4775]: E1125 21:04:31.851076 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:04:35 crc kubenswrapper[4775]: I1125 21:04:35.847293 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:04:35 crc kubenswrapper[4775]: E1125 21:04:35.847997 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:04:41 crc kubenswrapper[4775]: I1125 21:04:41.070821 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:04:41 crc kubenswrapper[4775]: I1125 21:04:41.071863 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:04:42 crc kubenswrapper[4775]: I1125 21:04:42.849225 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:04:42 crc kubenswrapper[4775]: E1125 21:04:42.850351 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:04:47 crc kubenswrapper[4775]: I1125 21:04:47.847600 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:04:47 crc kubenswrapper[4775]: E1125 21:04:47.848441 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:04:55 crc kubenswrapper[4775]: I1125 21:04:55.848060 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:04:55 crc kubenswrapper[4775]: E1125 21:04:55.849143 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:05:01 crc kubenswrapper[4775]: I1125 21:05:01.849606 4775 scope.go:117] "RemoveContainer" 
containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:05:01 crc kubenswrapper[4775]: E1125 21:05:01.850696 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:05:06 crc kubenswrapper[4775]: I1125 21:05:06.850229 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:05:06 crc kubenswrapper[4775]: E1125 21:05:06.851570 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:05:11 crc kubenswrapper[4775]: I1125 21:05:11.070720 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:05:11 crc kubenswrapper[4775]: I1125 21:05:11.071428 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:05:16 crc kubenswrapper[4775]: I1125 21:05:16.847678 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 
21:05:16 crc kubenswrapper[4775]: E1125 21:05:16.850136 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:05:19 crc kubenswrapper[4775]: I1125 21:05:19.847687 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:05:19 crc kubenswrapper[4775]: E1125 21:05:19.848213 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:05:29 crc kubenswrapper[4775]: I1125 21:05:29.846900 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:05:29 crc kubenswrapper[4775]: E1125 21:05:29.847749 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:05:30 crc kubenswrapper[4775]: I1125 21:05:30.848272 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:05:30 crc kubenswrapper[4775]: E1125 21:05:30.849139 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share 
pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:05:41 crc kubenswrapper[4775]: I1125 21:05:41.070883 4775 patch_prober.go:28] interesting pod/machine-config-daemon-w4zbm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:05:41 crc kubenswrapper[4775]: I1125 21:05:41.071471 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:05:41 crc kubenswrapper[4775]: I1125 21:05:41.071538 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" Nov 25 21:05:41 crc kubenswrapper[4775]: I1125 21:05:41.072683 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e5dc574bf7e9f5ff295fba89c7f5111e700c07e045e03c467d5fd7a2b8aff21"} pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 21:05:41 crc kubenswrapper[4775]: I1125 21:05:41.072788 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" podUID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerName="machine-config-daemon" containerID="cri-o://2e5dc574bf7e9f5ff295fba89c7f5111e700c07e045e03c467d5fd7a2b8aff21" gracePeriod=600 Nov 25 21:05:41 crc kubenswrapper[4775]: I1125 
21:05:41.847180 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:05:41 crc kubenswrapper[4775]: E1125 21:05:41.848156 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-api\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-api pod=manila-api-0_openstack(a18f9ccb-ee60-48c8-9fe2-5a505036b958)\"" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" Nov 25 21:05:42 crc kubenswrapper[4775]: I1125 21:05:42.150782 4775 generic.go:334] "Generic (PLEG): container finished" podID="bdb8b79f-4ccd-4606-8f27-e26301ffc656" containerID="2e5dc574bf7e9f5ff295fba89c7f5111e700c07e045e03c467d5fd7a2b8aff21" exitCode=0 Nov 25 21:05:42 crc kubenswrapper[4775]: I1125 21:05:42.150842 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerDied","Data":"2e5dc574bf7e9f5ff295fba89c7f5111e700c07e045e03c467d5fd7a2b8aff21"} Nov 25 21:05:42 crc kubenswrapper[4775]: I1125 21:05:42.151204 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w4zbm" event={"ID":"bdb8b79f-4ccd-4606-8f27-e26301ffc656","Type":"ContainerStarted","Data":"477a56866059b4d4cbe75cd7e316bb24b4fee2db5eea2720da7c936837f7b786"} Nov 25 21:05:42 crc kubenswrapper[4775]: I1125 21:05:42.151228 4775 scope.go:117] "RemoveContainer" containerID="c83739bea9b9c7f002a96f92186d9399e104ca08fa9a52eb5ab2106bf320b886" Nov 25 21:05:42 crc kubenswrapper[4775]: I1125 21:05:42.849059 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:05:42 crc kubenswrapper[4775]: E1125 21:05:42.850548 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:05:55 crc kubenswrapper[4775]: I1125 21:05:55.847204 4775 scope.go:117] "RemoveContainer" containerID="de05966d9f4d52804dec04a11d7212bbb25a35c13c8f60eb53431a5277a5f6f9" Nov 25 21:05:56 crc kubenswrapper[4775]: I1125 21:05:56.303379 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a18f9ccb-ee60-48c8-9fe2-5a505036b958","Type":"ContainerStarted","Data":"20dc8132bea9e5583fef746c1eb15527c1e2b4aca9eb022f3e677eb92b3e6cc1"} Nov 25 21:05:56 crc kubenswrapper[4775]: I1125 21:05:56.304319 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 21:05:57 crc kubenswrapper[4775]: I1125 21:05:57.847296 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:05:57 crc kubenswrapper[4775]: E1125 21:05:57.847764 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:06:10 crc kubenswrapper[4775]: I1125 21:06:10.847026 4775 scope.go:117] "RemoveContainer" containerID="12a6d76a6e6fdb5a62d4f10c889682195080209a30e67afb7acc6f0dda8af367" Nov 25 21:06:10 crc kubenswrapper[4775]: E1125 21:06:10.848140 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-share\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manila-share pod=manila-share-share1-0_openstack(0a88473d-4ba5-4147-bf60-128f0b7ea8f6)\"" 
pod="openstack/manila-share-share1-0" podUID="0a88473d-4ba5-4147-bf60-128f0b7ea8f6" Nov 25 21:06:13 crc kubenswrapper[4775]: I1125 21:06:13.334359 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 21:06:13 crc kubenswrapper[4775]: I1125 21:06:13.360203 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-api-0" podUID="a18f9ccb-ee60-48c8-9fe2-5a505036b958" containerName="manila-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"